[Columbus] Columbus Digest, Vol 128, Issue 3

Aleksandr Zaichenko alex.zai90 at gmail.com
Wed Mar 26 16:04:47 EDT 2025


Hi Thomas,

I used the colinp script to generate the CI inputs and didn't modify
anything further in the ciudg inputs. This could mean there is a bug in the
preparation script, generating inconsistent or uninitialized input
parameters, or that something is wrong in the ciudg memory management. I
will try different computational parameters and let you know if I fix it.

For the SOCI, my goal was initially to perform non-perturbative variational
SOC (if I understand the given Columbus example with Eu1 correctly). For
the test case I tried to generate (10,10,10,10) roots for the odd
multiplicities and to perform this RASSI-like calculation, but
unfortunately, due to this memory-allocation trouble, I couldn't do it and
instead tried to test just the ground-state CI calculation, which failed
too, as you can see. I was aware that for the non-perturbative iterative
SOC I would have to reduce the problem size further, but the tests failed,
which is why I asked you about it. It is in any case important to have the
CI-only calculation as a basic non-correlated reference. Please let me know
if there is anything I should know about the calculation setup.

About this actinide ground state: yes. It is a very non-trivial system,
which is why we study it.

PS. Is there a way to print out or define the Omega quantum number of the
SO states of a diatomic molecule?

Best,
Aleksandr

Wed, Mar 26, 2025 at 15:42, Thomas Mueller <th.mueller at fz-juelich.de>:

> On Tue, 25 Mar 2025, Aleksandr Zaichenko wrote:
>
>    AO-MO trafo is fine.
>    DRT construction is fine
>
>    The ci part may indicate some overflows, probably because some
>    variables are not initialized.
>
>    I'd suspect that you created with this CASCI calculation a
>    very special case which somehow crashes the memory management.
>
>    What might be a good idea is to simply retain, say, the lowest
>    two VOs and allow single and double excitations - this will
>    increase your CSF space by a factor of 4 - so no problem.
>    This is a standard case and should work; then remove the
>    doubles and check again, and finally try the CASCI.
>
>    SOCI with CASCI(20o,6e) is not easy.
>    With actinides the ground state is not well separated,
>    so you will likely need all spin multiplicities
>    (7,5,3,1). The basis size is then the same as for a
>    determinant basis: 6e in 40 spin orbitals
>    gives rise to 40*39*38*37*36*35/720 determinants
>    (and in this case also CSFs).
>    You will also have to compute a (possibly large)
>    number of roots. Symmetry would help a lot here.
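The determinant count quoted above is easy to sanity-check; this is just an illustrative sketch in plain Python (nothing Columbus-specific):

```python
# Determinants for 6 electrons in 40 spin orbitals (20 spatial orbitals
# x 2 spins): choose which 6 of the 40 spin orbitals are occupied.
from math import comb

n_spin_orbitals = 2 * 20  # CAS(20o) -> 40 spin orbitals
n_electrons = 6

n_dets = comb(n_spin_orbitals, n_electrons)
print(n_dets)                                  # 3838380
assert n_dets == 40*39*38*37*36*35 // 720      # matches the quoted formula
```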
>
>    Thomas
>
>
>
>
> > Dear Thomas,
> >
> > Yes, it seems weird to me too. Initially I tried to compute the
> > spin-orbit coupling in this active space and encountered the problem;
> > after that I tried just to compute this septet ground state and met the
> > same issue. I am not sure that MCSCF is a good alternative for my case;
> > however, I don't run the MR-CISD in parallel. I don't see any obvious
> > problem in my calculation setup; moreover, everything outside the
> > active space is frozen and any additional excitations are impossible.
> > I can send you the logs or inputs or both, if you want. Could something
> > be wrong with my installation, or could something go wrong inside the
> > program, such as an infinite loop arising? I really have no idea how
> > such a memory demand can occur. Should I try something else? This
> > problem does not look good.
> >
> > Best,
> > Aleksandr
> >
> > Tue, Mar 25, 2025 at 05:16, Thomas Mueller <th.mueller at fz-juelich.de>:
> >       On Mon, 24 Mar 2025, Aleksandr Zaichenko via Columbus wrote:
> >          Dear Aleksandr,
> >
> >          this looks quite weird. There is no way to produce a 1.2 TB
> >          diagonal integral file (this would amount to a basis of
> >          approx. sqrt(1.2*10**12/8) = 38*10**4 functions).
> >
> >          Since you have probably defined a reference space of 38760 CSFs,
> >          the program should not try to use the standard dsyevx/dsyevd
> >          eigensolver to extract the lowest root - which would require
> >          (38760**2)*8 bytes of memory plus, say, 20% overhead - but
> >          instead use iterative reference-space diagonalization.
> >          The alternative is to just start from the unit vector with the
> >          smallest diagonal matrix element - technically there is no
> >          difference.
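Both estimates above can be reproduced in a few lines; a quick sketch, assuming 8-byte reals as in the message:

```python
from math import sqrt

# (1) Basis size implied by a 1.2 TB file of 8-byte reals:
#     roughly sqrt(1.2e12 / 8), i.e. about 3.9e5 ~ 38*10**4 functions.
implied_basis = sqrt(1.2e12 / 8)

# (2) Dense diagonalization (dsyevx/dsyevd) of a 38760 x 38760 matrix
#     needs (38760**2)*8 bytes, about 12 GB, plus the ~20% overhead.
dense_bytes = 38760**2 * 8

print(f"implied basis size:  {implied_basis:.2e}")
print(f"dense solver memory: {dense_bytes / 1e9:.1f} GB")
```

So a dense reference-space solve would cost ~12 GB, not 1.2 TB, which is why the file size itself points at something else going wrong.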
> >
> >          CASCI is not really the target of the MR-CISD code, although
> >          of course one can use it. The only charming feature is that
> >          the iterative CASCI calculation runs in parallel (provided it
> >          is switched on).
> >
> >          BTW, for CASCI(20o,6e) without symmetry and S=0,
> >          the size of the CSF space should be
> >          379050 CSFs (= 21!/(3!18!) * 21!/(4!17!) * 1/21).
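The 379050 figure follows from the Weyl dimension formula for N electrons in n orbitals at total spin S; a small helper (my own sketch, not a Columbus utility) reproduces it, and at S=3 it also gives the 38760 septet CSFs that cidrtls reports earlier in the thread:

```python
from math import comb

def weyl_csfs(n_orb: int, n_elec: int, s2: int) -> int:
    """Weyl dimension formula; s2 = 2S (s2=0 singlet, s2=6 septet).

    N_CSF = (2S+1)/(n+1) * C(n+1, N/2 - S) * C(n+1, N/2 + S + 1)
    """
    a = (n_elec - s2) // 2
    b = (n_elec + s2) // 2 + 1
    return (s2 + 1) * comb(n_orb + 1, a) * comb(n_orb + 1, b) // (n_orb + 1)

print(weyl_csfs(20, 6, 0))  # 379050 singlet CSFs, as quoted above
print(weyl_csfs(20, 6, 6))  # 38760 septet CSFs
```

Weighting each S in (0, 1, 2, 3) by its multiplicity (2S+1) and summing recovers the 3838380 determinants mentioned above.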
> >
> >          The alternative is to use the mcscf code: run a single macro
> >          iteration, switch off orbital rotations (which ought to also
> >          switch off the explicit construction of the Hessian matrix
> >          components B, C and M), and select iterative solution of the
> >          CI problem. This should also force the AO-MO trafo step to
> >          compute just the all-internal MOs.
> >
> >          Beyond about 10**6 CSFs a CASCI calculation with MR-CISD is
> >          less useful, since the formula tape construction becomes more
> >          and more dominant and the matrix element construction is
> >          scalar (i.e. not fast) - with sufficient patience you can go
> >          to 10**9 CSFs, but running FCI with a general MR-CISD code is
> >          not that efficient. The MCSCF code is somewhat better
> >          positioned for this special application. Despite all its
> >          shortcomings, DMRG (especially CheMPS2) does a good job here.
> >
> >          The documentation in $COLUMBUS/docs/fulldoc/ciudg.txt,
> >          $COLUMBUS/docs/fulldoc/mcscf.txt hopefully clarifies the various
> >          input options.
> >
> >          Best,
> >
> >         Thomas
> >
> >
> >       > Dear Hans,
> >       >
> >       > Yes. Correct. It is the MR-CISD program. Excitations into the
> >       > virtual space are 0; moreover, all virtual orbitals as well as
> >       > the lower ones are frozen.
> >       >
> >       > The problem size from cidrtls is:
> >       >
> >       > total: 38760
> >       >
> >       > and it seems correct. The growth of the memory demand occurs in
> >       > the diagint file, up to 1.2 TB, and the calculation fails with
> >       > SIGSEGV.
> >       >
> >       > As far as I know, this should not happen for a CI problem of
> >       > that dimension with only one root, even with SO. I can share
> >       > listings if you need them.
> >       >
> >       > Best,
> >       > Aleksandr
> >       >
> >       >
> >       >
> >       > On Mon, Mar 24, 2025 at 6:46 PM <columbus-request at lists.osc.edu>
> wrote:
> >       >       Send Columbus mailing list submissions to
> >       >               columbus at lists.osc.edu
> >       >
> >       >       To subscribe or unsubscribe via the World Wide Web, visit
> >       >               https://lists.osu.edu/mailman/listinfo/columbus 
> >       >       or, via email, send a message with subject or body 'help'
> to
> >       >               columbus-request at lists.osc.edu
> >       >
> >       >       You can reach the person managing the list at
> >       >               columbus-owner at lists.osc.edu
> >       >
> >       >       When replying, please edit your Subject line so it is more
> specific
> >       >       than "Re: Contents of Columbus digest..."
> >       >
> >       >
> >       >       Today's Topics:
> >       >
> >       >          1. CASCI extraordinary memory demand (Aleksandr
> Zaichenko)
> >       >          2. Re: CASCI extraordinary memory demand (Hans Lischka)
> >       >
> >       >
> >       >
>  ----------------------------------------------------------------------
> >       >
> >       >       Message: 1
> >       >       Date: Mon, 24 Mar 2025 15:36:14 -0400
> >       >       From: Aleksandr Zaichenko <alex.zai90 at gmail.com>
> >       >       To: columbus at lists.osc.edu
> >       >       Subject: [Columbus] CASCI extraordinary memory demand
> >       >       Message-ID:
> >       >               <CAAJgZKtK7jg7Vij3mJuNcCrShToEQrxkgJZfQ=
> BKkgUz6Qvc9A at mail.gmail.com>
> >       >       Content-Type: text/plain; charset="utf-8"
> >       >
> >       >       Dear Columbus Team,
> >       >
> >       >       I have a question about a resource-requirement issue
> >       >       with the CI module for a CASCI calculation. A calculation
> >       >       with active space CAS(6,20) without any external
> >       >       excitations (all lower orbitals are frozen, all higher
> >       >       orbitals are removed) contains 38760 CI vectors (as it
> >       >       should), and I should get a CI-only solution for only the
> >       >       first root. Most QC programs do this calculation without
> >       >       any problem, but the Columbus CI module requires more
> >       >       than 1 TB of memory for it, which seems very demanding.
> >       >       Why is that? Do I understand correctly that the program
> >       >       tries to construct a full-CI problem for all of the 38760
> >       >       roots even though only 1 root is required?
> >       >
> >       >       Best,
> >       >       Aleksandr
> >       >       -------------- next part --------------
> >       >       An HTML attachment was scrubbed...
> >       >       URL: <
> http://lists.osu.edu/pipermail/columbus/attachments/20250324/b257d5cf/attachment-0001.html 
> >
> >       >
> >       >       ------------------------------
> >       >
> >       >       Message: 2
> >       >       Date: Mon, 24 Mar 2025 17:45:30 -0500
> >       >       From: Hans Lischka <hans.lischka at univie.ac.at>
> >       >       To: columbus at lists.osc.edu
> >       >       Subject: Re: [Columbus] CASCI extraordinary memory demand
> >       >       Message-ID: <
> bc527a53-1a8d-4bea-b08a-30e2d1f800d1 at univie.ac.at>
> >       >       Content-Type: text/plain; charset="utf-8"; Format="flowed"
> >       >
> >       >       Hi Aleksandr,
> >       >       I am not sure what kind of calculation you really did.
> >       >       The CI module is meant to do an MR-CISD calculation. I
> >       >       assume you took the CAS(6,20) as reference? How did you
> >       >       proceed further? The CI program will want to create all
> >       >       SD excitations from the 38760 configurations. Did you opt
> >       >       for that? This would explain the 1 TB of memory required.
> >       >       Look into the cidrtls file; there you will see the CI
> >       >       dimension. If you want the CASCI calculation you should
> >       >       input zero excitations (into the virtual space).
> >       >       Please let me know what you did.
> >       >       Best regards, Hans
> >       >
> >       >       On 3/24/2025 2:36 PM, Aleksandr Zaichenko via Columbus
> wrote:
> >       >       > Dear Columbus Team,
> >       >       >
> >       >       > I have a question about a resource-requirement issue
> >       >       > with the CI module for a CASCI calculation. A
> >       >       > calculation with active space CAS(6,20) without any
> >       >       > external excitations (all lower orbitals are frozen,
> >       >       > all higher orbitals are removed) contains 38760 CI
> >       >       > vectors (as it should), and I should get a CI-only
> >       >       > solution for only the first root. Most QC programs do
> >       >       > this calculation without any problem, but the Columbus
> >       >       > CI module requires more than 1 TB of memory for it,
> >       >       > which seems very demanding. Why is that? Do I
> >       >       > understand correctly that the program tries to
> >       >       > construct a full-CI problem for all of the 38760 roots
> >       >       > even though only 1 root is required?
> >       >       >
> >       >       > Best,
> >       >       > Aleksandr
> >       >       >
> >       >       > _______________________________________________
> >       >       > Columbus mailing list
> >       >       > Columbus at lists.osc.edu
> >       >       > https://lists.osu.edu/mailman/listinfo/columbus 
> >       >       -------------- next part --------------
> >       >       An HTML attachment was scrubbed...
> >       >       URL: <
> http://lists.osu.edu/pipermail/columbus/attachments/20250324/06c785be/attachment.html 
> >
> >       >
> >       >       ------------------------------
> >       >
> >       >       Subject: Digest Footer
> >       >
> >       >       _______________________________________________
> >       >       Columbus mailing list
> >       >       Columbus at lists.osc.edu
> >       >       https://lists.osu.edu/mailman/listinfo/columbus 
> >       >
> >       >       ------------------------------
> >       >
> >       >       End of Columbus Digest, Vol 128, Issue 3
> >       >       ****************************************
> >       >
> >       >
> >       >
> >
> >       -----------------------------------------------------------
> >       Dr. Thomas Mueller
> >       Institute for Advanced Simulation (IAS)
> >       Juelich Supercomputing Centre (JSC)
> >
> >       Phone:  +49-2461-61-3175
> >       Fax:    +49-2461-61-6656
> >       E-mail: th.mueller at fz-juelich.de
> >       WWW:    http://www.fz-juelich.de/jsc
> >
> >       JSC is the coordinator of the
> >       John von Neumann Institute for Computing (NIC)
> >       and member of the
> >       Gauss Centre for Supercomputing (GCS)
> >       -----------------------------------------------------------
> >
> >
> >
>
> -----------------------------------------------------------
> Dr. Thomas Mueller
> Institute for Advanced Simulation (IAS)
> Juelich Supercomputing Centre (JSC)
>
> Phone:  +49-2461-61-3175
> Fax:    +49-2461-61-6656
> E-mail: th.mueller at fz-juelich.de
> WWW:    http://www.fz-juelich.de/jsc
>
> JSC is the coordinator of the
> John von Neumann Institute for Computing (NIC)
> and member of the
> Gauss Centre for Supercomputing (GCS)
> -----------------------------------------------------------

