"Eigenvalue soLvers for Petaflop Applications" (ELPA) Library: Distribution, Information and Documentation

ELPA is a Fortran-based high-performance computational library for the (massively) parallel solution of symmetric or Hermitian, standard or generalized eigenvalue problems.

This is the kind of eigenvalue problem that is frequently encountered in electronic structure theory (the solution of Schrödinger's equation or variants thereof), but also in many other fields. Typically, the solution effort scales as O(size^3), where "size" is a measure of the system size, for example the dimension of the associated matrices or the number of required eigenvalue/eigenvector pairs (at most the matrix dimension). Thus, an algebraically exact solution of the eigenproblem can quickly become the bottleneck in a practical application.
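For illustration, the generalized symmetric problem A x = λ B x mentioned above can be reduced to a standard problem via a Cholesky factorization of B, which is the usual route in dense eigensolvers. A minimal serial sketch with NumPy (illustration only, with made-up test matrices; this is not ELPA's parallel implementation):

```python
import numpy as np

# Hypothetical small test matrices: A symmetric, B symmetric positive definite.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)

# Reduce A x = lambda B x to standard form using the Cholesky factor B = L L^T:
#   (L^{-1} A L^{-T}) y = lambda y,  with y = L^T x.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T

lam, Y = np.linalg.eigh(C)   # standard symmetric eigensolve (LAPACK-backed)
X = Linv.T @ Y               # back-transform the eigenvectors: x = L^{-T} y

# Residual of the original generalized eigenpairs, ||A X - B X diag(lam)||.
resid = np.linalg.norm(A @ X - B @ X @ np.diag(lam))
```

The back-transformation step is cheap compared with the O(n^3) eigensolve itself, which is why the reduction approach dominates in practice.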

On a single CPU, efficient libraries such as LAPACK, in combination with tuned, vendor-specific Basic Linear Algebra Subroutines (BLAS), will take care of this problem for you.
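As a concrete example of such a LAPACK-backed single-CPU solve, NumPy's `eigh` dispatches to LAPACK's symmetric/Hermitian eigensolvers under the hood:

```python
import numpy as np

# A small real symmetric matrix; np.linalg.eigh calls LAPACK's divide-and-conquer
# symmetric eigensolver internally.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, V = np.linalg.eigh(A)  # eigenvalues in ascending order, orthonormal eigenvectors
# w -> [1., 3.]
```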

However, for large problems a parallel solution may be unavoidable, whether for speed or simply for memory reasons. ELPA is an effort to provide such a solution in the most efficient way available.

Who created ELPA?

The original ELPA Library was created by the "Eigenvalue soLvers for Petaflop Applications" (ELPA) consortium, a collaborative effort supported by the German Federal Ministry of Education and Research (BMBF), grant number 01IH08007.

Where is ELPA published / how can I acknowledge ELPA?

There are currently two references that should be cited when using or referring to ELPA:

1) An account of some core improvements in ELPA is given in the following article:

 T. Auckenthaler, V. Blum, H.-J. Bungartz, T. Huckle, R. Johanni, L. Krämer, B. Lang, H. Lederer, and P. R. Willems:
 "Parallel solution of partial symmetric eigenvalue problems from electronic structure calculations"
 Parallel Computing 37, 783-794 (2011). 

2) A complete overview of the library appeared in late 2013:

Andreas Marek, Volker Blum, Rainer Johanni, Ville Havu, Bruno Lang, Thomas Auckenthaler, Alexander Heinecke,
Hans-Joachim Bungartz, and Hermann Lederer:
"The ELPA Library - Scalable Parallel Eigenvalue Solutions for Electronic Structure Theory and Computational Science",
Scientific Highlight for December 2013.

We expect that a journal version of this "Highlight" article will appear in the course of 2014.

Much of the early ELPA work arose in direct connection with the FHI-aims general-purpose all-electron electronic structure code, developed at the Fritz-Haber-Institute of the Max-Planck-Society.

How does ELPA work?

ELPA is a set of Fortran modules (subroutines) that can be compiled as a separate library or together with an application of choice. Compiling ELPA itself should be more or less trivial with any current Fortran compiler and standard optimizations; the free gfortran or g95 compilers should work. However, we strongly recommend more efficient vendor-specific compilers (Intel Fortran, The Portland Group Fortran, IBM xlf, etc.) for actual production work. (Please note that different compilers can have different bugs at high optimization levels. We therefore recommend cross-checking numerical results with lower optimization levels and/or more than one compiler, especially in the beginning - just to be on the safe side.)

Once compiled, the ELPA library routines can be called from C, C++, or Fortran code alike.

ELPA works as a "drop-in enhancement" for ScaLAPACK-based infrastructures (arguably the de facto standard for high-performance parallel linear algebra). Thus, ELPA is not independent of this infrastructure but rather builds on it. Necessary prerequisite libraries for ELPA (often already provided by HPC vendors) include:

- Basic Linear Algebra Subroutines (BLAS)

- LAPACK

- Basic Linear Algebra Communication Subroutines (BLACS)

- ScaLAPACK

In addition, you will need a working "Message Passing Interface" (MPI) library for parallel execution.

ELPA is an MPI-only implementation, i.e., no hybrid (MPI/OpenMP) parallelization is assumed. It will thus work both in a single-node, shared-memory environment and on large clusters of separate nodes. To date, tests have been conducted on up to approx. 262,000 processor cores concurrently (on the JUGENE IBM BlueGene/P machine).
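The matrix layout underneath is ScaLAPACK's block-cyclic distribution: every global row/column index is mapped to an owning process in the grid and a local index on that process. A pure-Python sketch of this mapping for one grid dimension (equivalent to ScaLAPACK's INDXG2P/INDXG2L with source process 0; illustration only, not part of the ELPA API):

```python
def block_cyclic(g, nb, nprocs):
    """Map global (0-based) index g to (process, local_index) for a
    block-cyclic distribution with block size nb over nprocs processes."""
    proc = (g // nb) % nprocs               # which process owns this block
    local = (g // (nb * nprocs)) * nb + g % nb  # position in that process's storage
    return proc, local

# Example: 8 indices, block size 2, 2 processes in this grid dimension.
layout = [block_cyclic(g, nb=2, nprocs=2) for g in range(8)]
# indices 0,1,4,5 land on process 0; indices 2,3,6,7 on process 1
```

The same mapping is applied independently to rows and columns, yielding the two-dimensional block-cyclic layout that ScaLAPACK (and hence ELPA) operates on.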

More detailed information on how to use ELPA can be found (and will continue to be collected) on this page.

What is the license for ELPA, and what are my obligations?

We would like to keep ELPA open source, accessible to any application. At the same time, we wish to ensure that modifications to the ELPA library itself are contributed back (or, at least, distributed in source form under the same license terms that we offer) to the wider ELPA infrastructure. Assembling ELPA was a significant amount of work over a long period of time - we hope that you will respect this requirement.

The chosen license is thus a variant of the "Lesser General Public License" (v3), but with the added modification that source code redistribution MUST keep the additional permissions of the LGPL v3 intact. Please refer to the "COPYING" directory in the distributed ELPA library version for exact information.

If you believe that ELPA would be useful for you but these license terms are not enough, we would like to hear from you. Our intention is to make the technology / knowledge available in a way that is fair and useful for everyone.

So how can I get ELPA?

The ELPA code is hosted in a "git" repository, and can be accessed by cloning this repository. This means that anyone who has a version of ELPA will automatically be able to contribute back to the development in a simple way. Please do so in a careful and coordinated fashion, but that said, we strongly encourage everyone to contribute back fixes, improvements or enhancements that they need.

Initial access to the git repository can be obtained through the "Git Access Registration" link in the top left corner of this page. Follow the steps to enter the necessary registration data. We will then provide you with a git account and the necessary information to obtain the code.

In addition, it would help us if you send a short email to the address:

to inform us - not because we are so data hungry, but simply because access to a git repository is login-based. Having an established communication link always helps in an actively developed project.

Which other (similar) efforts are out there for the massively parallel solution of eigenvalue problems by dense linear algebra?

There are a number of projects that we are aware of that are dealing with this problem, and we will be happy to add links to more as we become aware of them:

- ScaLAPACK

- SLEPc for large scale sparse eigenvalue problems

- Elemental

- and perhaps many others ... (please let us know)

Please do not ask us or expect us to be able to provide detailed usage recommendations or performance comparisons for all these projects. We are happy to acknowledge our colleagues and learn from them, and vice versa. However, a complete, general evaluation of different projects, including the detailed settings that different libraries may require, is well beyond our means (time and manpower). What we are happy to state is that we hope we did the best we could with ELPA on the platforms available to us, and that, to our knowledge, some of the work done with ELPA would simply not have been possible in the same way otherwise.

Of course, anyone is welcome and encouraged to try and provide benchmarks / feedback for their pet projects.
