A fast and precise DFT wavelet code

Run a calculation

Estimating the memory usage: bigdft-tool

Before running BigDFT, it is recommended to run the bigdft-tool program. If it runs correctly, all input files are validated (this tool needs a posinp.xyz or a posinp.ascii file, and does not accept a list_posinp file). It then estimates the required memory and helps to find an optimal number of MPI processes for a parallel run: for good load balancing, each MPI process should treat roughly the same number of orbitals.

Here is an example of a memory estimation with 8 processes for the carbon cubic test of BigDFT:

caliste@myrte:~/local/bigdft-master/tests/DFT/cubic/C$ ll
-rw-r----- 1 caliste L_Sim  1017 Feb 14 14:29 input.dft
-rw-r----- 1 caliste L_Sim    21 Feb 14 14:29 input.occ
-rw-r----- 1 caliste L_Sim    51 Feb 14 14:29 posinp.xyz
caliste@myrte:~/local/bigdft-master/tests/DFT/cubic/C$ bigdft-tool -n 8

will output something like:

---
  #-------------------------------------------------------- Estimation of Memory Consumption
 Memory requirements for principal quantities (MiB.KiB):
   Subspace Matrix                     : 0.1 #    (Number of Orbitals: 4)
   Single orbital                      : 0.202 #  (Number of Components: 25808)
   All (distributed) orbitals          : 0.605 #  (Number of Orbitals per MPI task: 1)
   Wavefunction storage size           : 2.977 #  (DIIS/SD workspaces included)
   Nonlocal Pseudopotential Arrays     : 0.59
   Full Uncompressed (ISF) grid        : 7.882
   Workspaces storage size             : 0.595
 Memory requirements for principal code sections (MiB.KiB):
   Kernel calculation                  : 22.822
   Density Construction                : 27.794
   Poisson Solver                      : 18.764
   Hamiltonian application             : 28.213
 Estimated Memory Peak (MB)            :  28

Values are given per MPI process.
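For example, with the 8 MPI processes requested above, the aggregate memory peak over the whole run is thus roughly 8 × 28 MB ≈ 224 MB.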

The bigdft-tool program also prints out the total number of orbitals and how many orbitals are treated by each MPI process. On a parallel machine with a high-performance interconnect network, one can choose the number of MPI processes nproc equal to the number of occupied orbitals, i.e. each MPI process treats one orbital. On machines with slower networks, each MPI process should have at least 2 to 4 orbitals.

  #---------------------------------------------------------------------- Occupation numbers
 Total Number of Electrons             :  4
 Spin treatment                        : Averaged
 Orbitals Repartition:
   MPI tasks  0- 3                     :  1
   MPI tasks  4- 7                     :  0

In this example there are only 4 orbitals, so MPI processes 4 to 7 do not treat any orbital, but they are still used when working on the potentials.
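
With only 4 occupied orbitals, a well-balanced run would therefore use 4 MPI processes, so that each process treats exactly one orbital; this can be verified by re-running the estimator:

bigdft-tool -n 4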

Single point or geometry optimization: bigdft

Simple BigDFT run

On most machines, the MPI version is executed with the mpirun command followed by the name of the executable, which is bigdft in our case. The treatment of each orbital can be sped up by using the mixed MPI/OpenMP implementation, where each MPI process uses several OpenMP threads to do the calculations for its orbitals faster. OpenMP is activated by compiling the program with an OpenMP flag and by specifying the number of OpenMP threads, e.g. export OMP_NUM_THREADS=4 if 4 threads are desired.

Example of a command to run BigDFT on an 8-core SMP machine, using 4 MPI processes and 2 threads per process:

OMP_NUM_THREADS=2 mpirun -np 4 bigdft > log

BigDFT requires only the description of the atomic positions, i.e. the posinp.{xyz,ascii} file. All other input files (pseudopotentials or input parameters) are optional for a single point run; default values are used when they are absent.
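
As a minimal illustration, assuming the usual BigDFT extension of the xyz format (first line: number of atoms and the length unit; second line: boundary conditions; then one line per atom), a posinp.xyz for an N2 molecule could look like this (the 1.1 Å bond length is just an example value):

2 angstroem
free
N 0.0 0.0 0.0
N 0.0 0.0 1.1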

Changing default input parameters

When bigdft-tool or bigdft is run, it generates default.* files containing the input parameters used for the run, i.e. the default values.

To change the default values, one can rename a default.* file to the corresponding input.* file and edit it. See the category:input files list for extended details.
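
For instance, to customize the DFT parameters of a run (a sketch; copying instead of renaming keeps the generated defaults around for reference):

$ cp default.dft input.dft
$ vi input.dft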

Running a job with a naming scheme

By providing an additional argument to BigDFT, one can select a naming scheme for the run. Input and output files will then use this prefix.

This example uses the naming scheme boron2x4:

$ ll
-rw-r----- 1 caliste L_Sim   279 Feb 22 16:01 default.kpt
-rw-r----- 1 caliste L_Sim  1017 Feb 14 14:29 boron2x4.dft
-rw-r----- 1 caliste L_Sim    51 Feb 14 14:29 boron2x4.xyz
$ mpirun -np 2 bigdft boron2x4
$ ll
drwxr-x--- 1 caliste L_Sim  4096 Feb 22 16:01 data-boron2x4
-rw-r----- 1 caliste L_Sim   279 Feb 22 16:01 default.kpt
-rw-r----- 1 caliste L_Sim  1017 Feb 14 14:29 boron2x4.dft
-rw-r----- 1 caliste L_Sim    51 Feb 14 14:29 boron2x4.xyz

Miscellaneous output of BigDFT

BigDFT generates files (such as output wavefunctions, …) in a subdirectory called data, or data-{namingScheme} when a naming scheme is used.

During a run, the BigDFT program monitors the memory utilization and the time spent in the various subroutines. Detailed information is written in the files data/malloc.prc and data/time.prc. At the end, the program checks whether the number of deallocations equals the number of allocations and whether the total allocated memory went back to zero. If this is not the case, please send a bug report to the developers of BigDFT.
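
Both reports can be inspected with the usual text tools, for example:

$ less data/time.prc
$ tail data/malloc.prc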
