# The Solver Package

The BigDFT package is linked with an accurate and efficient Poisson Solver written on the basis of interpolating scaling functions. For some years this solver has also been available as a separate software package, so that it can be linked and used by other codes.

We explain here the instructions for compiling and linking the Solver as a separate package.


# Download of the Solver

The development version of the package can be downloaded (at present) together with the other packages of the BigDFT suite:

```shell
bzr branch lp:bigdft
```

will provide the required sources. Actually, only the packages on which the Solver depends are needed for compilation. In the future the package will be versioned in a separate repository. After the Solver is downloaded you may proceed with its compilation.

# Compilation and Installation

The Solver itself can be conceived as a package suite, as it depends on other packages, in particular the FUTILE package. Therefore, the same installer script that is used for BigDFT can be used here, but only for the Solver. From the build directory, the main command that has to be given is

```shell
<path-to-the-sources>/Installer.py build psolver <some-other-options-here>
```

The complete options of the `Installer.py` script can be found in the BigDFT installation page; we recall the basic steps here.

## Configure line

If you already know the configure line you should use, you may install the package with a syntax that is very similar to the traditional configuration of autotools.
For example, let us write the configure line for an installation on a platform that has `gfortran`, MPI libraries and CUDA installed:

```shell
<path-to-sources>/Installer.py build psolver -c "FC=mpif90" "FCFLAGS=-O2 -fopenmp" "--enable-cuda-gpu"
```

You should find a prompt similar to the following:

```
Parsing arguments with method argparse...
Configuration chosen for the Installer:
 Hostname: pilipili
 Source directory: <path-to-sources>
 Compiling from a branch: True
 Build directory: <path-to-binaries>
 Action chosen: build
 Verbose: True
 Jhbuild baseline: <path-to-sources>/jhbuild.py -f buildrc
 Configuration options:
   Source: Environment variable 'BIGDFT_CONFIGURE_FLAGS'
   Value: ' "FC=mpif90" "FCFLAGS=-O2 -fopenmp" "--enable-cuda-gpu" '
Do you want to continue (Y/n)?
```

Then you answer what you prefer at the command-line prompt. If you answer `y` or press enter, the installation procedure will start.

Note: when compiling the code from a development branch, you should have the autotools (autoconf in particular) installed on the platform. If you do not have them, you should install from a tarball.

After a successful installation you should find the following message:

```
*** success *** [3/3]
--------------------------------------------------
Thank you for using the Installer of BigDFT suite.
The action considered was: build
SUCCESS: The Installer seems to have built correctly psolver bundle
All the available executables and scripts can be found in the directory
"<path-to-binaries>/install/bin"
The suite has been built from a single configure line.
Your used configuration options have been saved in the file 'buildrc'.
Such file will be used for next builds, you might also save it in the
'rcfiles/' directory of the source for future use. The name might
contain the hostname.
--------------------------------------------------
```

The interesting directories of the Poisson Solver package are then under `<path-to-binaries>/install/`. In particular, the libraries are in the `lib/` subdirectory of this path, which brings us to the next section.

# Linking the Solver with External Codes

As a library, once installed, the Solver can be linked with other codes by adding simple instructions to the link line of the external executable. To identify the suitable instructions for a particular installation, there is the `link` command of the Installer. This operation has also been presented in the BigDFT installation page. The command to be launched is very simple.
From the installation path (where the install/ directory is) just type:

```shell
<path-to-sources>/Installer.py link psolver
```

After answering "y" to the prompt (you may skip the prompt by adding the `-y` option to the command line above) you will find a message similar to:

```
---------
Linking line to build with package "psolver":
-I<path-to-binaries>/install/include -L<path-to-binaries>/install/lib
-lPSolver-1 -lfutile-1 -lcudart -lcublas -lcufft -lscalapack-openmpi
-lblacs-openmpi -lblacsF77init-openmpi -llapack -lblas -lyaml -ldl -lrt
-lfutile-1 -lcudart -lcublas -lcufft -lscalapack-openmpi -lblacs-openmpi
-lblacsF77init-openmpi -llapack -lblas -lyaml -ldl -lrt
--------------------------------------------------
```

which is the result provided by the CUDA-based installation presented above.
This line can be used to create an executable linked with the `libPSolver-1.a` library.
As the package is made of Fortran modules, it should be compiled with the same compiler that will be used for the external executable.

Note: this action requires the presence of the `pkg-config` program in your system's installation. On traditional Linux-based platforms this is easily found, whereas there might be problems finding this program in the default configuration of Mac OS systems.

## A simple test program which links with the solver

We now have all the information needed to call the Poisson Solver from a separate program. Let us now try to do it explicitly, so as to also give a small overview of the API of the solver. Let us consider the main steps in the usage of the Electrostatic Solver.

### The basic objects

The Solver works in real space: it performs the job on an input/output array, which we will indicate as `rhoV`.
`rhoV` corresponds to the electrostatic density as input, and to the
electrostatic potential as output.
We will see below which sizes this array should have.

The operations of the solver are handled by an object of type `coulomb_operator`, provided by the `Poisson_Solver` module:

```fortran
use Poisson_Solver
[...]
type(coulomb_operator) :: pkernel
```

Clearly this object has to be initialized to define the particular system in which the array `rhoV` is defined.
The initialization of the `pkernel` variable is separated into two steps.
The first step is associated with the `pkernel_init` function:

```fortran
use dictionaries, dict_set => set
integer :: mpirank,mpisize          !MPI processes
type(dictionary), pointer :: inputs !input parameters
real(dp) :: alpha,beta,gamma        !possible angles of the box axis
integer, dimension(3) :: ndims      !sizes of the simulation domain
real(dp), dimension(3) :: hgrids    !uniform mesh spacings in the three directions
[...]
call f_lib_initialize()
[...]
call dict_init(inputs)
[...]
pkernel=pkernel_init(mpirank,mpisize,inputs, &   !setup
     geocode,ndims,hgrids,&                      !geometry
     alpha_bc=alpha,beta_ac=beta,gamma_ab=gamma) !optional arguments
call dict_free(inputs)
[...]
```

All the arguments are self-explanatory (we refer, however, to the developer's documentation), with the exception of the input parameters `inputs`.

This is a `FUTILE` dictionary that contains the (optional) specifications
of the input parameters that have to be passed to correctly initialize the `pkernel` object.
Such a dictionary may be filled from a string written in `YAML` syntax (like, for example, an input file) or by initializing the components separately.
We will dedicate a separate section to the description of such parameters.

For a CPU-only Vacuum Boundary Conditions calculation it is enough to leave the dictionary empty, just initializing it with the `dict_init` routine.
The dictionary may then be freed after the call to the `pkernel_init` function.
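To give an idea of the structure of such a dictionary, here is an illustrative sketch in Python: a FUTILE dictionary maps naturally onto a nested Python dict (or a YAML document). The keys shown are the ones used elsewhere on this page; the Python representation itself is only an analogy, not part of the library's API.

```python
# Illustration only: the FUTILE input dictionary is structurally a nested
# mapping, set in Fortran with dict_set(inputs//'key'//'subkey', value).
inputs = {}                                        # call dict_init(inputs)
inputs['setup'] = {'accel': 'CUDA',                # GPU acceleration
                   'global_data': True}            # full rhoV on each process
inputs['environment'] = {'cavity': 'soft-sphere'}  # implicit-solvation cavity
```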

### Ready, steady, go

We are not yet ready to use the Electrostatic Solver:
the `coulomb_operator` structure has to allocate the internal arrays needed to perform the basic operations.
This is done by the following routine,

```fortran
call pkernel_set(pkernel)
```

which can be called at a later time with respect to `pkernel_init`,
the latter being a function mainly associated with the reading of the input parameters.

After this routine has been called, we may call the main routine of the package:

```fortran
call Electrostatic_Solver(pkernel,rhov,ehartree=eh) !ehartree is among the optional args
```

That would perform the calculation.

What about the size of the array `rhoV`? For a single-MPI run there is of course no ambiguity: the array `rhoV` should have a size which satisfies the following test:

```fortran
test = size(rhoV) == product(ndims) !this should be true for mpisize == 1 in pkernel_init
```

However, for an MPI-parallelized calculation the Solver performs an internal splitting of the density (and potential) along the third dimension (of size `ndims(3)`). You therefore have two choices:

- Pass to the solver the *entire* density for each MPI process and retrieve the *entire* potential as output. Such an operation is more intuitive: the size of the `rhoV` array is still that of the complete simulation domain. However, this operation requires one extra gathering communication among the solver processes. For this reason, this is *not* the default. To employ this distribution, there are two ways:

1) You call `pkernel_init` with a dictionary that has the `['setup']['global_data']` key set to true:

```fortran
[...] !after dict_init of course
call dict_set(inputs//'setup'//'global_data',.true.)
[...]
```

2) After the `pkernel_set` call but before `Electrostatic_Solver`, you may switch distribution by calling

```fortran
call PS_set_options(pkernel,global_data=.true.)
```

This is possible as the `rhoV` distribution is among the few options that can also be modified at runtime (i.e. after the initialization and allocation of the `pkernel` instance of the `coulomb_operator` derived type).

- Pass to the Electrostatic routine only the slices that are needed for the calculation and retrieve the potential in the same distribution.

Such an option is the default for an MPI initialization of `pkernel`.
The user is therefore responsible for the initialization of `rhoV`. The convention adopted is the following:

*the xy-planes of `rhoV` go from plane `pkernel%grid%istart+1` to plane `pkernel%grid%istart+pkernel%grid%n3p` of the third direction*

It might therefore appear wise to add to your code a check, prior to the call to `Electrostatic_Solver`, like the following:

```fortran
use f_utils
[...]
call f_assert(size(rhoV) >= pkernel%mesh%ndims(1)*pkernel%mesh%ndims(2)*pkernel%grid%n3p,&
     'the size of rhoV is not big enough for a distributed calculation')
```

which would make the code stop in case of an erroneous allocation.

If needed, one might always reconstruct the array in the global distribution from the distributed one by performing a call to the `PS_gather` routine:

```fortran
call PS_gather(pkernel,src=rhoV,dest=rhoV_global) !here the size of the destination is the product of ndims
```
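To make the plane-distribution convention concrete, here is a minimal Python sketch of a slab decomposition along the third dimension. The even-split formula is an illustrative assumption, not necessarily the one used internally by the solver; `istart` and `n3p` mimic the roles of `pkernel%grid%istart` and `pkernel%grid%n3p`.

```python
def slab_decomposition(n3, mpisize):
    """Assign contiguous slabs of xy-planes of a grid with n3 planes to
    mpisize processes (illustrative even split; the solver's actual
    formula may differ)."""
    base, rest = divmod(n3, mpisize)
    slabs = []
    istart = 0
    for rank in range(mpisize):
        n3p = base + (1 if rank < rest else 0)  # planes owned by this rank
        # this rank holds planes istart+1 .. istart+n3p (Fortran 1-based)
        slabs.append((istart, n3p))
        istart += n3p
    return slabs

# concatenating the slabs in rank order reconstructs the full third
# dimension, which is what a gathering operation like PS_gather does
slabs = slab_decomposition(n3=10, mpisize=4)
```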

### Please leave the place as you would like to find it

A well-educated person always cleans up their garbage when it is no longer needed. Therefore the routines

```fortran
call pkernel_free(pkernel) !when not needed anymore
[...]
call f_lib_finalize() !at the end of the program
```

have to be called eventually.

### Summary

The impatient developer (an oxymoron?) might find here all the explanations, written so that a cut-and-paste can be done for inclusion in a test program:

```fortran
use Poisson_Solver
use dictionaries, dict_set => set
use yaml_output, only: yaml_map
[...]
integer :: mpirank,mpisize          !MPI processes
type(dictionary), pointer :: inputs !input parameters
real(dp) :: alpha,beta,gamma        !possible angles of the box axis
integer, dimension(3) :: ndims      !sizes of the simulation domain
real(dp), dimension(3) :: hgrids    !uniform mesh spacings in the three directions
type(coulomb_operator) :: pkernel
type(PSolver_energies) :: energies  !contains more information about the various components when applicable
[...]
call f_lib_initialize() !initialize futile modules
[...] !decide the geometry values (ndims, hgrids, geocode, angles (radians))
call dict_init(inputs) !default values (inputs={})
!override if willing (example of GPU)
if (gpu) & !in python it would be inputs['setup']['accel']='CUDA'
     call dict_set(inputs//'setup'//'accel','CUDA')
!if you want the complete potential for each process anytime
call dict_set(inputs//'setup'//'global_data',.true.)
!set the kind of solver and communicator
pkernel=pkernel_init(mpirank,mpisize,inputs, &   !setup
     geocode,ndims,hgrids,&                      !geometry
     alpha_bc=alpha,beta_ac=beta,gamma_ab=gamma) !optional
!free input variables if not needed anymore
call dict_free(inputs)
[...] !do other stuff here
call pkernel_set(pkernel,verbose=.true.) !allocate buffers (verbosely if you like)
![...] from this point you need to allocate (and fill) the rhoV array as you like
!transform density into potential
call Electrostatic_Solver(pkernel,rhoV,energies)
!this is like print (little advertisement of the yaml emitter in FUTILE)
call yaml_map('The hartree energy of this run is',energies%hartree)
![...] end of usage of the solver
call pkernel_free(pkernel) !release buffers
call f_lib_finalize()
```

## Implicit solvation

To set up a nonvacuum calculation, we first need to set the proper dictionary keys:

```fortran
call dict_set(inputs//'environment'//'cavity','soft-sphere') ! or sccs
call dict_set(inputs//'environment'//'gps_algorithm','PCG')  ! or SC
call dict_set(inputs//'environment'//'delta',delta)          ! only for soft-sphere
![...] similar dictionary settings for all input variables of the environment class
```

"cavity" can be *soft-sphere* or *sccs*, depending on the implicit solvation model we want to adopt. In both cases we build up the dielectric cavity starting from the initial atomistic system. The other keys are explained above, and all of them can be set up here by modifying the dictionary.

Afterwards, as done for a vacuum calculation, we have to set up the Poisson kernel. There are several ways to do that:

- In the case of the soft-sphere model, we can call the following subroutine to generate the dielectric cavity epsilon(r) and all the working arrays:

```fortran
call pkernel_set_epsilon(pkernel,nat=nat,rxyz=rxyz,radii=radii)
```

We just need some information about our atomistic system, like the total number of atoms (`nat`), their positions (`rxyz(1:3,nat)`) and their radii (`radii`).

- If you have a given cavity epsilon(r) (on the same real-space grid as the input charge), built up with other methods, you can directly set up all the working arrays (needed by the PCG or SC solver of the generalized Poisson equation):

```fortran
call pkernel_set_epsilon(pkernel,eps=eps)
```

- If you have both the cavity and the working arrays, you can pass all of them to the solver. In the case of the PCG solver you need these three vectors:

```fortran
call pkernel_set_epsilon(pkernel,eps=eps,oneosqrteps=oneosqrteps,corr=corr)
```

Here `eps` represents the dielectric cavity epsilon(r), `oneosqrteps` the inverse of its square root, and `corr` the vector q(r) of equation (16) in Fisicaro et al., J. Chem. Phys. 144, 014103 (2016). All these vectors have to be on the same real-space grid as the input charge.

In the case of the SC solver you need to pass these vectors:

```fortran
call pkernel_set_epsilon(pkernel,eps=eps,oneoeps=oneoeps,dlogeps=dlogeps)
```

where `oneoeps` is the inverse of epsilon and `dlogeps` the derivative of its natural logarithm.
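For illustration only, the auxiliary arrays just described can be sketched in Python/NumPy as follows. The library itself expects them as Fortran arrays on the same grid as the input charge, and the finite-difference gradient below is merely a stand-in for whatever discretization the caller uses; the function name `cavity_arrays` is hypothetical.

```python
import numpy as np

def cavity_arrays(eps, hgrids):
    """From a dielectric cavity eps(r) on a uniform grid with spacings
    hgrids, build the auxiliary arrays described in the text
    (illustrative sketch; the library computes these internally)."""
    oneoeps = 1.0 / eps                  # inverse of epsilon (SC solver)
    oneosqrteps = 1.0 / np.sqrt(eps)     # inverse of its square root (PCG solver)
    # gradient of ln(eps) along the three directions (SC solver)
    dlogeps = np.stack(np.gradient(np.log(eps), *hgrids))
    return oneoeps, oneosqrteps, dlogeps
```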

### Summary

Here is a code example to solve the generalized Poisson equation for a nonvacuum environment where a dielectric cavity is present:

```fortran
use Poisson_Solver
use dictionaries, dict_set => set
use yaml_output, only: yaml_map
[...]
integer :: mpirank,mpisize          !MPI processes
type(dictionary), pointer :: inputs !input parameters
real(dp) :: alpha,beta,gamma        !possible angles of the box axis
integer, dimension(3) :: ndims      !sizes of the simulation domain
real(dp), dimension(3) :: hgrids    !uniform mesh spacings in the three directions
type(coulomb_operator) :: pkernel
type(PSolver_energies) :: energies  !contains more information about the various components when applicable
integer :: nat                      !atom number to build up the soft-sphere cavity
real(dp), dimension(3,nat) :: rxyz  !atom coordinates
real(dp), dimension(nat) :: radii   !atom radii for the soft-sphere cavity
integer :: method                   !illustrative switch among the cavity setups below
[...]
call f_lib_initialize() !initialize futile modules
[...] !decide the geometry values (ndims, hgrids, geocode, angles (radians))
call dict_init(inputs) !default values (inputs={})
!override if willing (example of GPU)
if (gpu) & !in python it would be inputs['setup']['accel']='CUDA'
     call dict_set(inputs//'setup'//'accel','CUDA')
!if you want the complete potential for each process anytime
call dict_set(inputs//'setup'//'global_data',.true.)
!to set up a nonvacuum calculation
call dict_set(inputs//'environment'//'cavity','soft-sphere') ! or sccs
call dict_set(inputs//'environment'//'gps_algorithm','PCG')  ! or SC
call dict_set(inputs//'environment'//'delta',delta)          ! only for soft-sphere
![...] similar dictionary settings for all input variables of the environment class
!set the kind of solver and communicator
pkernel=pkernel_init(mpirank,mpisize,inputs, &   !setup
     geocode,ndims,hgrids,&                      !geometry
     alpha_bc=alpha,beta_ac=beta,gamma_ab=gamma) !optional
!free input variables if not needed anymore
call dict_free(inputs)
[...] !do other stuff here
!generate the dielectric cavity epsilon(r) and all working arrays
select case(method)
case(1) !soft-sphere model
   call pkernel_set_epsilon(pkernel,nat=nat,rxyz=rxyz,radii=radii)
case(2) !cavity epsilon(r) built up with other methods: set up all working arrays for PCG or SC
   call pkernel_set_epsilon(pkernel,eps=eps)
case(3) !you have both the cavity and the working arrays and pass all of them to the solver
   !case of the PCG solver
   call pkernel_set_epsilon(pkernel,oneosqrteps=oneosqrteps,corr=corr,eps=eps)
   !case of the SC solver
   call pkernel_set_epsilon(pkernel,eps=eps,oneoeps=oneoeps,dlogeps=dlogeps)
end select
![...] from this point you need to allocate (and fill) the rhoV array as you like
!transform density into potential
call Electrostatic_Solver(pkernel,rhoV,energies)
!this is like print (little advertisement of the yaml emitter in FUTILE)
call yaml_map('The hartree energy of this run is',energies%hartree)
![...] end of usage of the solver
call pkernel_free(pkernel) !release buffers
call f_lib_finalize()
```