A fast and precise DFT wavelet code

Installation

The compilation and installation of BigDFT rely on the standard GNU build chain: ’configure’, ’make’, ’make install’. BigDFT can be used as an independent program (as described in this manual), or as a library to be embedded in other software, such as ABINIT.

Building the executables

There are some practical examples of compilation on different architectures, where you might find useful information on how the code has been compiled on different platforms and with different options.

Configure

The BigDFT build system is based on the standard GNU autotools. End users do not need the autotools package installed on their computer: the configure script provided in the BigDFT package will create the appropriate Makefiles and set all the compilation options, such as the optimization level, the libraries to link with, and so on.

After the package has been untarred, the sources should be configured for the local architecture of the system. Thanks to the autotools, several builds can be generated from the same source tree. It is advised to create a dedicated compilation directory, either inside or outside the source tree; let's call this directory compile-gFortran, for instance. From there, one starts the configuration with 'source tree path'/configure.
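As a concrete sketch of this out-of-tree build (the directory location and the compiler and prefix choices here are illustrative, not prescribed by the package):

```shell
# Create a dedicated build directory next to the untarred sources
mkdir compile-gFortran
cd compile-gFortran

# Run configure from the build directory, pointing back to the source tree
../configure FC=gfortran --prefix=$HOME/usr
```

Several such directories can coexist (e.g. one per compiler), all sharing the same source tree.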

General description of the options

One can tune the compilation environment using the following options:

  • FC: Specify the Fortran compiler (including MPI-aware wrappers).
  • FCFLAGS, CFLAGS and CXXFLAGS: Specify the flags, like the optimisation flags, to pass to the compilers (default is -g -O2 for GNU compilers).
  • --prefix=DIR: Specify your installation directory (/usr/local is default).
  • Linear algebra options:
    • --with-ext-linalg: Give the name of the libraries replacing BLAS and LAPACK (default: none specified). Use the -l prefix before the name(s).
      • --with-ext-linalg-path: Give the path to these linear algebra libraries (default = -L/usr/lib). Use the -L prefix before the path(s).
    • --with-blacs: Use a parallelised version of the BLAS routines. If an argument is provided, it should contain the linking options (like -lblacs_provider). Use --with-blacs-path to specify a lookup path.
    • --with-scalapack: like --with-blacs for Scalapack.
    • --enable-dgemmsy: if the target machine supports SSE3 instruction set, this will compile an optimized version of dgemm operations for symmetric matrices.
  • Accelerators:
    • --enable-cuda-gpu: Compile CUDA support for GPU computing.
      • --with-cuda-path: Give the path to the NVIDIA CUDA tools (default is /usr/local/cuda).
      • --with-nvcc-flags: Specify the flags for the NVIDIA CUDA Compiler.
    • --enable-opencl: Compile OpenCL support for GPU computing (compatible with --enable-cuda-gpu).
      • --with-ocl-path: Give the path to the OpenCL installation directory (default is /usr).
    • --enable-intel-mic: Compile Intel MIC support.
      • --with-intel-mic-libs: Link the MIC executable with the given additional libraries.
  • Additional features:
    • --with-etsf-io: Use ETSF file format (binary based on NetCDF) for densities, potentials and wavefunction files.
    • --with-archives: Use compression (tar.bz2) for position files during geometry optimisation.
    • --enable-bindings: Build the C and Python bindings (disabled by default).

The other available options can be browsed via the --help option. Some of them are listed here (they can of course be combined with each other, when that makes sense):

  • --disable-mpi: Do not use MPI during the build. By default, configure tries to detect whether the compiler has native MPI capabilities; if not, MPI is automatically disabled.
  • --enable-debug: Creates a slower version of the executable in which every allocated array is padded with NaNs beyond its boundaries. Useful to detect runtime errors during development.
  • --with-memory-limit=<mem>: Creates a version of the executable which aborts the run if one of the processes allocates more memory than <mem> (in GB). This version is not necessarily slower than a traditional compilation.
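As an illustrative sketch (the compiler, flags and limit value are example choices, not recommendations), these options combine with the ones above on a single configure line:

```shell
# Debug build without MPI; abort any process allocating more than 2 GB
../configure FC=gfortran FCFLAGS="-g -O0" \
             --disable-mpi --enable-debug \
             --with-memory-limit=2
```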

Example of configure output

At the end of the configure script, a summary is printed. It looks like this:

~/bigdft-trunk/tmp-gfortran$ ../configure \
  FC=mpif90.openmpi \
  FCFLAGS="-fbounds-check -O2 -Wall"
  [...]
  Basics:
  Fortran90 compiler:        mpif90.openmpi
  Fortran90 compiler name:
  Fortran90 flags:           -fbounds-check -O2 -Wall
  Fortran77 compiler:        gfortran
  Fortran77 flags:           -g -O2
  Linker flags:              -L$(top_builddir)/libXC/src -L$(top_builddir)/libABINIT/src
  Linked libraries:          -labinit -lxc    -llapack -lblas

  Build:
  Library ABINIT:            yes
  Library PSolver:           yes
  Library BigDFT:            yes
  Main binaries (cluster...):yes
  Minima hopping binary:     no
  atom and pseudo binaries:  no
  User documentation:        yes
  Devel. documentation:      yes / no

  Options:
  Debug version:             no
  With MPI:                  yes
  | Include dir.:
  | Linker flags:
  | Linked libraries:
  | MPI2 support:            yes
  With optimised conv.:      yes
  With Cuda GPU conv.:       no
  | NVidia Cuda Compiler:
  | Cuda flags:
  With OpenCL support:       no
  With dgemmsy support:      no
  With libXC:                yes
  | internal built:          yes
  | include dir.:            -I$(top_builddir)/libXC/src
  With libABINIT:            yes
  | internal built:          yes
  | include dir.:            -I$(top_builddir)/libABINIT/src
  With libS_GPU:             no
  | internal built:          no
  | include dir.:
  With ETSF_IO:              no
  | include dir.:           

  Installation paths:
  Source code location:      ..
  Prefix:                    /usr/local
  Exec prefix:               ${prefix}
  Binaries:                  ${exec_prefix}/bin
  Static libraries:          ${exec_prefix}/lib
  Fortran modules:           ${prefix}/include/
  Documentation:             ${datarootdir}/doc/${PACKAGE_TARNAME}

Now, let's look at some of the most common cases.

Use Intel MKL libraries

The Intel compiler is usually provided with native LAPACK and BLAS implementations, called the MKL libraries. To use them, pass the --with-ext-linalg option to configure.

  ../configure --with-ext-linalg="-lmkl_ia32 -lmkl_lapack" \
  --with-ext-linalg-path="-L/opt/intel/mkl72/lib/32" \
  --prefix=/home/caliste/usr FC=ifort

In this example, the --prefix option is not mandatory and is just provided to specify the destination directory for installation.

MPI compilation

MPI detection is enabled by default, and the current Fortran compiler (specified with FC) is tested for MPI capabilities. Both MPI and MPI2 are supported; if MPI2 is not available, a fallback implementation is used.

../configure FC=mpif90

If the Fortran compiler does not support MPI, a warning message is printed by the configure script. To remove this message, disable MPI detection with the --disable-mpi option.

One can also pass all the options for the MPI linking stage explicitly, using --with-mpi-include, --with-mpi-ldflags and --with-mpi-libs (not recommended).

OpenCL compilation

Here is an example using the Intel Fortran compiler and OpenCL installed in /applications/cuda-3.2:

  ../../sources/bigdft-1.5.1/configure FC=ifort \
  --enable-opencl --with-ocl-path=/applications/cuda-3.2

CUDA compilation

Compilation with CUDA currently requires building the code with "second underscore" name mangling, so that the compiler knows how to link the C and Fortran sources together. Here is an example using the Intel Fortran compiler and CUDA installed in /applications/cuda-2.2:

  ../../sources/bigdft-1.3.0-dev/configure \
  FC=ifort FCFLAGS="-O2 -assume_2underscores" \
  CC=icc CXX=icc CXXFLAGS="-O2 -I/applications/cuda-2.2/include/" \
  CFLAGS="-O2 -I/applications/cuda-2.2/include/" \
  --enable-cuda-gpu --with-cuda-path=/applications/cuda-2.2

NetCDF I/O

Here is an example using the Intel Fortran compiler, with NetCDF installed in /applications/netcdf-3.6.3 and ETSF_IO compiled in a home directory:

  ../../sources/bigdft-1.5.1/configure FC=ifort --with-etsf-io \
  --with-etsf-io-path=$HOME/usr \
  --with-netcdf-path=/applications/netcdf-3.6.3

Compilation

Make

Build the package and create the ’bigdft’ executable by issuing make. The GNU parallel-build option -jn works with any value of n (tested up to 16).
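For example, on a multi-core machine (the job count below is an illustrative choice):

```shell
# Build with 8 parallel jobs; any value of n in -jn works (tested up to 16)
make -j8
```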

Install

To install the package, issue make install. It will copy all files to the specified prefix (see configure). Additionally, another installation path can be chosen by passing a prefix value on the make command line.

make install prefix=$HOME/usr

This example will install binaries and documentation in $HOME/usr, regardless of the prefix specified at configuration time. The destination directory will be created automatically if it does not exist.

Clean

Clean the build tree of the results of the ’make’ step with make clean.

The executables

BigDFT provides the following executables:

  • bigdft (previously called cluster): run DFT ground-state calculations, with or without geometry relaxation.
  • bigdft-tool (previously called memguess): read BigDFT inputs and provide an accurate estimation of the memory requirements (for each CPU in case of an MPI run). It can also do some simple jobs (see the --help option output):
bigdft-tool -a memory-estimation [options]:
    Performing memory estimation for a run of BigDFT.

bigdft-tool -a rotate [options]:
    Rotate the input file to use the smallest mesh possible using files

bigdft-tool -a convert-field FROM TO:
    Convert the given scalar field to another format, files FROM and TO
    are of the form <file.{etsf,cube}>.

bigdft-tool -a export-wf FILE:
    Export the compressed wavefunction from FILE to a scalar-field
    representation in Cube format.

bigdft-tool -a export-grid [options]:
    Export in XYZ format the positions of all grid points.

bigdft-tool -a atomic-wf [options]:
    Calculates the atomic wavefunctions of the first atom in the gatom
    basis.
  • NEB: run a NEB path search (also requires NEB_driver.sh and NEB_include.sh).
  • splsad: to be explained.
  • frequencies: run a finite-difference calculation to find the vibrational modes of a molecule.
  • MDanaysis: browse the 'posout' files generated during a molecular dynamics run to compute several quantities, like the radial distribution g(r).
  • BigDFT2Wannier and WaCo: to be explained.
  • bart: EXPERIMENTAL, ART implementation using BigDFT for the force calculation.
  • abscalc: EXPERIMENTAL, compute XANES spectrum.

Building a library

To avoid building the binary executables, use the --disable-binaries option.

The main subroutine of the BigDFT package is the call_bigdft routine. For a given set of input coordinates and input parameters, it returns the total energy and the forces acting on the nuclei. The BigDFT.f90 main program calls the call_bigdft routine and can also perform a geometry optimization by calling the geopt routine (which in turn calls call_bigdft again). Other main programs exist for other standard applications: at present, main programs for vibrational analysis, saddle-point search and global optimization have been developed. Users are encouraged to write their own main programs for specific applications.

Building dynamic libraries

From version 1.7-dev.24, BigDFT can generate dynamic libraries; use the --enable-dynamic-libraries option. This will generate a libbigdft-1.so file that can be linked against.

Important notice: libtool is currently not used and support is restricted to gfortran only, but other compilers and linkers will be supported in the future.

Building the Poisson Solver library only

From version 1.7-dev.27, it is possible to compile only the Poisson Solver into a separate library, by using --disable-libbigdft in conjunction with --disable-binaries. The compilation will then enter the PSolver/ subdirectory, build libPSolver-1.a and stop.

A dynamic version of the library can be obtained using the --enable-dynamic-libraries configure option. For instance:

../configure FC=mpif90 FCFLAGS="-O2 -fopenmp" --disable-libbigdft --disable-binaries --enable-dynamic-libraries

will compile with make and install the following:

usr
├── include
│   └── poisson_solver.mod
└── lib
    ├── libPSolver-1.so -> libPSolver-1.so.7
    ├── libPSolver-1.so.7 -> libPSolver-1.so.7.0.27
    ├── libPSolver-1.so.7.0.27
    └── libPSolver-1.a

Note: additional modules are installed as dependencies of the Poisson Solver; they are not listed here.

Dependencies

The Poisson solver depends on MPI. It also depends on a BLAS implementation for optimized data copy. It can be compiled with OpenMP.

Linking with the static library

Linking with libPSolver-1.a requires providing all the necessary dependencies on the link line. These dependencies include some libraries built within the BigDFT package itself:

  • libflib.a, can be found in $(BIGDFT)/flib/src;
  • libwrappers.a, can be found in $(BIGDFT)/wrappers.

Starting from the configuration line above, compiling a static executable requires adding the following to the link line:

PSolver/src/libPSolver-1.a wrappers/libwrappers.a flib/src/libflib.a -llapack -lblas
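Putting this together, a link command for a user program could look like the following sketch; the program name main.f90, the build-directory variable and the module include path are assumptions made for illustration, not paths prescribed by the package:

```shell
# Hypothetical BigDFT build directory (out-of-tree build, see above)
BIGDFT=$HOME/bigdft/compile-gFortran

# Link a user program against the static Poisson Solver and its dependencies;
# the -I path for Fortran modules is an assumption and may differ per setup
mpif90 -O2 -fopenmp -I$BIGDFT/include main.f90 \
    $BIGDFT/PSolver/src/libPSolver-1.a \
    $BIGDFT/wrappers/libwrappers.a \
    $BIGDFT/flib/src/libflib.a \
    -llapack -lblas -o my_program
```

Note the link order: the Poisson Solver library comes first, followed by the BigDFT helper libraries it depends on, and the linear algebra libraries last.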

Running the tests

BigDFT is provided with several test cases (that can also be studied as examples). They are located in the tests directory. To run the tests, after compilation, issue make check in this directory.

Some options

Depending on the compilation options, these tests can be run with different parameters: OpenMP, MPI, OpenCL, etc. These parameters are normally options set in the input.perf file, but for all automatic tests they can be tuned by setting some environment variables.

Case of multi-CPU support

To run tests with MPI support, use the environment variable run_parallel as:

export run_parallel='mpirun -np 2'

For OpenMP, the standard OMP_NUM_THREADS environment variable has to be set.
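For example, a hybrid MPI/OpenMP test run can be set up as follows (the process and thread counts are illustrative choices) before issuing make check:

```shell
# Each test runs on 2 MPI processes with 4 OpenMP threads per process
export run_parallel='mpirun -np 2'
export OMP_NUM_THREADS=4
```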

Case of OpenCL support

If BigDFT has been compiled with OpenCL support, to run tests on GPU, use the environment variable run_ocl as:

export run_ocl='on'

One can force OpenCL to run on hardware other than the GPU by using:

export run_ocl='CPU'

It is possible to select the platform or the device to run on with:

export ocl_platform='NVIDIA'
export ocl_devices='K20'

Directory structure of tests subdir

libs

These test some specific parts of BigDFT, like the FFT, the XC calculation, etc. Enter each subdirectory and type make check there.

DFT/cubic, DFT/linear, DFT/postSCF and overDFT

These directories have a specific behaviour. Type make there for an overview:

==============================================================================
This is a directory for tests. Beside the 'make check'
one can use the following commands:
 make in:           generate all input dirs.
 make failed-check: run check again on all directories
                    with missing report or failed report.
 make complete-check: for developers, makes long and extensive tests.
 make X.in:         generate input dir for directory X.
 make X.check:      generate a report for directory X
                    (if not already existing).
 make X.recheck:    force the creation of the report in directory X.
 make X.clean:      clean the given directory X.
 make X.diff:       make the difference between output and the reference
                    (with the environment variable DIFF)
 make X.updateref   update the reference with the output
                    (prompt the overwrite)

Use the environment variable run_parallel
    ex: export run_parallel='mpirun -np 2'  
==============================================================================

It will perform tests of several basic capabilities of BigDFT, concerning the main SCF loop or post-SCF treatments.

tutorials

Like the DFT directories, tests are run with make X.check and friends. These tests correspond to the different tutorials.
