[Columbus] Columbus Digest, Vol 128, Issue 3
Aleksandr Zaichenko
alex.zai90 at gmail.com
Tue Mar 25 11:44:30 EDT 2025
Dear Thomas,

Yes, it looks weird to me too. Initially I tried to compute the
spin-orbit coupling in this active space and ran into the problem;
after that I tried to compute just the septet ground state and hit the
same issue. I am not sure that MCSCF is a good alternative in my case,
although I do not run MR-CISD in parallel anyway. I see no obvious
problem in my calculation setup; moreover, everything beyond the active
space is frozen, so no additional excitations are possible. I can send
you the logs, the inputs, or both, if you want. Could something be
wrong with my installation, or could something go wrong inside the
program, such as an infinite loop? I really have no idea how such a
memory demand can arise. Should I try something else? This problem
does not look good.
Best,
Aleksandr
On Tue, Mar 25, 2025 at 05:16 Thomas Mueller <th.mueller at fz-juelich.de> wrote:
> On Mon, 24 Mar 2025, Aleksandr Zaichenko via Columbus wrote:
> Dear Aleksandr,
>
> This looks quite weird. There is no way to produce a 1.2 TB diagonal
> integral file (this would amount to a basis of
> approx. sqrt(1.2*10**12/8) = 38*10**4).
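>
> As a quick back-of-the-envelope check, here is the above arithmetic as
> a few lines of Python (an illustrative sketch, assuming the file holds
> one 8-byte real per pair of basis-function indices):
>
>     import math
>     file_bytes = 1.2e12           # reported size of the diagonal integral file
>     n_reals = file_bytes / 8      # number of 8-byte reals it would hold
>     n_basis = math.sqrt(n_reals)  # implied basis dimension if n**2 reals
>     print(f"{n_basis:.0f}")       # -> 387298, i.e. approx. 38*10**4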
>
> Since you have probably defined a reference space of 38760 CSFs,
> the program should not try to use the standard dsyevx/dsyevd eigensolver
> to extract the lowest root, which would require (38760**2)*8 bytes of
> memory plus, say, 20% overhead, but should instead use iterative
> reference-space diagonalization.
> The alternative is to just start from the unit vector with the smallest
> diagonal matrix element - technically there is no difference.
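>
> To make the difference concrete: dense dsyevx/dsyevd storage would be
> 38760**2 * 8 bytes, i.e. about 12 GB, whereas an iterative scheme only
> ever touches matrix-vector products and the diagonal. A minimal
> Davidson-type sketch in Python/NumPy (an illustration of the idea, not
> the COLUMBUS implementation; the function name is made up here). It
> also starts from the unit vector with the smallest diagonal element:
>
>     import numpy as np
>
>     def lowest_root(matvec, diag, tol=1e-8, max_iter=100):
>         # Lowest eigenpair from matrix-vector products only;
>         # the full n*n matrix is never stored or diagonalized.
>         n = diag.size
>         v = np.zeros(n)
>         v[np.argmin(diag)] = 1.0     # unit vector, smallest diagonal element
>         V = v[:, None]               # orthonormal search subspace
>         for _ in range(max_iter):
>             AV = np.column_stack([matvec(V[:, i]) for i in range(V.shape[1])])
>             vals, vecs = np.linalg.eigh(V.T @ AV)  # small subspace problem
>             theta, s = vals[0], vecs[:, 0]         # lowest subspace root
>             x = V @ s
>             r = AV @ s - theta * x                 # residual vector
>             if np.linalg.norm(r) < tol:
>                 break
>             denom = diag - theta                   # diagonal preconditioner
>             denom[np.abs(denom) < 1e-12] = 1e-12
>             t = r / denom
>             t -= V @ (V.T @ t)                     # orthogonalize against V
>             t /= np.linalg.norm(t)
>             V = np.column_stack([V, t])            # expand the subspace
>         return theta, x
>
>     # toy usage: a diagonally dominant symmetric matrix
>     rng = np.random.default_rng(0)
>     n = 2000
>     A = np.diag(np.arange(1.0, n + 1.0)) + 1e-3 * rng.standard_normal((n, n))
>     A = (A + A.T) / 2
>     e0, _ = lowest_root(lambda x: A @ x, np.diag(A).copy())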
>
> CASCI is not really the target of the MR-CISD code - although of course
> one can use it for that. The only charming feature is that the iterative
> CASCI calculation runs in parallel (provided parallel execution is
> switched on).
>
> BTW, for CASCI(20o,6e) without symmetry and S=0
> the size of the CSF space should be
> 379050 CSFs (= 21!/(3!18!) * 21!/(4!17!) * 1/21).
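>
> This count follows the Weyl-Paldus dimension formula; a small Python
> check (an illustrative sketch; weyl_csfs is just a helper name used
> here, integer total spin S assumed):
>
>     from math import comb
>
>     def weyl_csfs(n_orb, n_elec, spin_S):
>         # Weyl-Paldus count of spin-adapted CSFs for n_elec electrons
>         # in n_orb orbitals with total spin S.
>         a = n_elec // 2 - spin_S
>         b = n_elec // 2 + spin_S + 1
>         return ((2 * spin_S + 1) * comb(n_orb + 1, a)
>                 * comb(n_orb + 1, b) // (n_orb + 1))
>
>     print(weyl_csfs(20, 6, 0))  # -> 379050 singlet CSFs, as above
>     print(weyl_csfs(20, 6, 3))  # -> 38760; incidentally the S=3 (septet)
>                                 #    count, matching the reference space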
>
> The alternative is to use the mcscf code, run a single macro iteration,
> switch off orbital rotations (which ought also to switch off the explicit
> construction of the Hessian matrix components B, C and M) and select the
> iterative solution of the CI problem. This should also force the AO-MO
> transformation step to compute just the all-internal MOs.
>
> Beyond about 10**6 CSFs a CASCI calculation with MR-CISD is less
> useful, since the formula-tape construction becomes more and more
> dominant and the matrix-element construction is scalar (i.e. not fast,
> but with sufficient patience you can go to 10**9 CSFs; still, running
> FCI with a general MR-CISD code is not that efficient).
> The MCSCF code is somewhat better positioned for this special
> application. Despite all its shortcomings, DMRG (especially CheMPS2)
> does a good job here.
>
> The documentation in $COLUMBUS/docs/fulldoc/ciudg.txt and
> $COLUMBUS/docs/fulldoc/mcscf.txt hopefully clarifies the various
> input options.
>
> Best,
>
> Thomas
>
>
> > Dear Hans,
> >
> > Yes, correct, it is the MR-CISD program. Excitations into the virtual
> > space are 0; moreover, all virtual orbitals, as well as the lower
> > ones, are frozen.
> >
> > The problem size from cidrtls is:
> >
> > total: 38760
> >
> > which seems correct. The memory demand grows in the diagint file, up
> > to 1.2 TB, and the calculation fails with SIGSEGV.
> >
> > As far as I know, this should not happen for a CI problem of this
> > dimension with only one root, even including SO. I can share the
> > listings if you need them.
> >
> > Best,
> > Aleksandr
> >
> >
> >
> > On Mon, Mar 24, 2025 at 6:46 PM <columbus-request at lists.osc.edu> wrote:
> >
> > Today's Topics:
> >
> > 1. CASCI extraordinary memory demand (Aleksandr Zaichenko)
> > 2. Re: CASCI extraordinary memory demand (Hans Lischka)
> >
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Mon, 24 Mar 2025 15:36:14 -0400
> > From: Aleksandr Zaichenko <alex.zai90 at gmail.com>
> > To: columbus at lists.osc.edu
> > Subject: [Columbus] CASCI extraordinary memory demand
> >
> > Dear Columbus Team,
> >
> > I have a question about a resource-requirement issue with the CI
> > module for a CASCI calculation. A calculation with the active space
> > CAS(6,20) and without any external excitations (all lower orbitals are
> > frozen, all higher orbitals are removed) contains 38760 CSFs (as it
> > should), and I need a CI-only solution for just the first root. Most
> > QC programs do this calculation without any problem, but the Columbus
> > CI module requires more than 1 TB of memory for it, which seems very
> > demanding. Why is that? Do I understand correctly that the program
> > tries to construct a full-CI problem for all of the 38760 roots even
> > though only 1 root is required?
> >
> > Best,
> > Aleksandr
> > ------------------------------
> >
> > Message: 2
> > Date: Mon, 24 Mar 2025 17:45:30 -0500
> > From: Hans Lischka <hans.lischka at univie.ac.at>
> > To: columbus at lists.osc.edu
> > Subject: Re: [Columbus] CASCI extraordinary memory demand
> >
> > Hi Aleksandr,
> >
> > I am not sure what kind of calculation you really did. The CI module
> > is meant to do an MR-CISD calculation. I assume you took the CAS(6,20)
> > as the reference? How did you proceed further? The CI program will
> > want to create all SD excitations from the 38760 configurations. Did
> > you opt for that? That would explain the 1 TB of memory required. Look
> > into the cidrtls file; there you will see the CI dimension. If you
> > want the CASCI calculation, you should input zero excitations (into
> > the virtual space).
> > Please let me know what you did.
> > Best regards, Hans
> >
>
> -----------------------------------------------------------
> Dr. Thomas Mueller
> Institute for Advanced Simulation (IAS)
> Juelich Supercomputing Centre (JSC)
>
> Phone: +49-2461-61-3175
> Fax: +49-2461-61-6656
> E-mail: th.mueller at fz-juelich.de
> WWW: http://www.fz-juelich.de/jsc
>
> JSC is the coordinator of the
> John von Neumann Institute for Computing (NIC)
> and member of the
> Gauss Centre for Supercomputing (GCS)
> -----------------------------------------------------------