bemfmm

Extreme scale FMM-accelerated boundary integral equation solver for wave scattering.


Project maintained by ecrc


Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

BEMFMM (https://ecrc.github.io/bemfmm/) is an extreme-scale Fast Multipole Method (FMM)-accelerated Boundary Element Method (BEM) parallel solver framework. It is a boundary integral equation solver for wave scattering, targeted at many-core processors, which are expected to be the building blocks of energy-austere exascale systems and on which algorithmic and architecture-oriented optimizations are essential for achieving good performance. The solver uses the GMRES iterative method, with the FMM implementing the matrix-vector product (MatVec) kernel. The underlying kernels are highly optimized for both shared- and distributed-memory architectures, and the solver framework features architecture-specific, algorithm-aware partitioning, load balancing, and communication-reducing mechanisms. To this end, BEMFMM provides a highly scalable FMM implementation that can be efficiently applied to the computation of the Helmholtz integral equation kernel. In particular, it addresses the parallel challenges of such an application, especially at extreme-scale settings, with emphasis on both shared- and distributed-memory performance optimization and tuning on emerging HPC infrastructures.
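As a conceptual illustration (this is not code from BEMFMM), GMRES only ever touches the system operator through a MatVec callback, which is exactly the role the FMM plays in this solver. The sketch below hand-rolls a minimal matrix-free GMRES in Python; a small dense complex matrix stands in for the FMM-applied BEM operator:

```python
import numpy as np

def gmres(matvec, b, tol=1e-10, max_iter=50):
    """Minimal unrestarted GMRES: needs only a matvec callback, as with an FMM."""
    n = b.size
    Q = np.zeros((n, max_iter + 1), dtype=complex)        # Arnoldi basis
    H = np.zeros((max_iter + 1, max_iter), dtype=complex) # Hessenberg matrix
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n, dtype=complex)
    for k in range(max_iter):
        v = matvec(Q[:, k])                 # the only access to the operator
        for j in range(k + 1):              # modified Gram-Schmidt
            H[j, k] = np.vdot(Q[:, j], v)
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        # least-squares solve of min || beta*e1 - H*y ||
        e1 = np.zeros(k + 2, dtype=complex)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(matvec(x) - b) < tol * beta:
            break
    return x

# Toy well-conditioned complex system standing in for the BEM operator.
rng = np.random.default_rng(0)
n = 60
A = np.eye(n) + 0.02 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = gmres(lambda v: A @ v, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # small relative residual
```

In BEMFMM the dense `A @ v` product above is replaced by an FMM traversal, reducing the MatVec cost from O(N^2) toward O(N) per iteration.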

The Underlying FMM Implementation

Image of the implemented FMM
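For reference (standard notation, not taken from the repository), the FMM here accelerates discrete sums involving the free-space Green's function of the Helmholtz equation with wavenumber $k$:

```latex
G(\mathbf{x}, \mathbf{y}) = \frac{e^{ik\,|\mathbf{x}-\mathbf{y}|}}{4\pi\,|\mathbf{x}-\mathbf{y}|},
\qquad
u(\mathbf{x}_i) = \sum_{j=1}^{N} G(\mathbf{x}_i, \mathbf{x}_j)\, q_j .
```

Evaluating the sum directly costs O(N^2); the FMM approximates far-field interactions hierarchically to bring the per-MatVec cost down to near O(N).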

System Workflow

Image of BEMFMM workflow

Requirements

The repository includes LAPACK, TBB, and ParMETIS, and the default Makefile links against these bundled copies. Thus, you may not need to install your own; you can simply use the ones included herein. However, if you have a better implementation that you wish to link against, just install it in your software environment and link to it directly.

That said, since the repository already includes the required external libraries, the minimum requirements to run BEMFMM are:

Please have these three dependencies configured and installed on your system before running the solver code.

Compiling and Linking

Edit the make.inc file to point to your installed dependencies. The defaults assume the GNU GCC compiler with MPICH; if you have these two configured and installed on your system, you may not need to edit make.inc at all. To exclude anything from the build, simply comment it out in make.inc. The Makefile itself is dynamic, so you should not need to change it: direct all of your changes to make.inc only. Even if you want to add extra compiler flags, use the USERCXXFLAGS, USERLIBS, and USERINCS variables in make.inc. Once you have edited make.inc, simply run:

make clean
make all
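For example, the user-override variables in make.inc might look like the following (an illustrative fragment; the compiler name, paths, and flag values are placeholders for your environment, not the repository's defaults):

```make
# Hypothetical user overrides in make.inc -- adjust to your environment.
CXX          = mpicxx                 # MPI C++ compiler wrapper (MPICH by default)
USERCXXFLAGS = -O3 -fopenmp           # extra compiler flags
USERINCS     = -I/opt/tbb/include     # extra include paths
USERLIBS     = -L/opt/tbb/lib -ltbb   # extra link paths and libraries
```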

make should generate an executable binary called bemfmm_test_mpi, which you can run directly with the mpirun launcher. Please provide your command-line arguments; to list all of the command-line arguments supported by the solver code, use -h or --help.
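For instance (the process count below is only an example; pick values appropriate for your machine):

```shell
# List the supported command-line arguments (run on one rank):
mpirun -np 1 ./bemfmm_test_mpi --help

# Example parallel run; replace 4 and add solver arguments as needed:
mpirun -np 4 ./bemfmm_test_mpi
```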

Running Test Cases

To give you a flavor of the expected outputs, you can use make test_serial for serial execution, or make test_parallel for parallel execution. Note: you may need to add the TBB library path to your LD_LIBRARY_PATH before running the executable. To do so, run the following bash command:

export LD_LIBRARY_PATH="TBB/lib:$LD_LIBRARY_PATH"

The example herein assumes that you are using the TBB implementation shipped with BEMFMM; hence, you may want to replace TBB/lib with your own TBB library path.

Tested Architectures

Here is a list of the systems on which we ran BEMFMM: [Note: for additional information, please read the supplementary material document of the SISC paper (https://epubs.siam.org/doi/suppl/10.1137/18M1173599).]

Control Parameters

Image of the Control Parameters

Concluding Remarks

The main focus of this software is the development of a highly scalable FMM that can be efficiently applied to the computation of the Helmholtz integral equation kernel. In particular, our framework addresses the parallel challenges of such an application, especially at extreme-scale settings, with emphasis on both shared- and distributed-memory performance optimization and tuning on emerging HPC infrastructures. Improving the convergence rate of the iterative solver, whether by using a preconditioner or by switching to a well-conditioned integral equation formulation, is beyond the scope of this framework. However, it is definitely on the roadmap for enhancing the stability of the numerical solver, and it is a major item of our ongoing work.

Contact

License

MIT License

Acknowledgments

Support in the form of computing resources was provided by the KAUST Extreme Computing Research Center, the KAUST Supercomputing Laboratory, the KAUST Information Technology Research Division, Intel Parallel Computing Centers, and the Cray Supercomputing Center of Excellence. In particular, the authors are very grateful to Bilel Hadri of the KAUST Supercomputing Laboratory for his great help and support throughout the scalability experiments on the Shaheen supercomputer.

Papers