It has become inconceivable to carry out experiments or develop a new theory in applied sciences and engineering without the heavy support of numerical simulations during all stages of research and development. Any industrial product of value, from cars, trucks, trains, ships, airplanes, large buildings, bridges, skyscrapers and medical devices to consumer goods, goes through considerable computer simulation-based design and optimization during its development. It is also true that computational engineering is everyday practice not only in the design, optimization and verification of finished products, but also in the optimization of manufacturing processes in practically all industrial sectors.
Furthermore, high-end computing of this kind is increasingly being used to build databases for fast simulations of engineering models. This new area, also known as real-time computing, either interpolates from these detailed databases or extracts the fundamental modes of the system to obtain reduced order models. Increased physical fidelity and high accuracy naturally imply large and growing computing requirements. Some areas of research and some problem classes in industry, such as large eddy simulation for turbulence modelling in aeronautics, the study of the effect of natural hazards on constructions, the challenges in fusion research, or stochastic optimal design, to name a few, are beyond current computing capabilities. Large-scale (exaflop) computing is the answer to this overall increasing demand for computing power.
Exascale machines are likely to be available on the 2020 horizon. By that time, let us assume that we can run the simulation of a simple cube with billions of elements on an exascale machine, so that the hardware side of the challenge is already covered. What else is needed to obtain the solution of a real-world problem, such as the simulation of the wind flow in a city (Fig. 1)?
The answer to this question is new numerical methods and computer codes, based on validated models, that fill the technological gap and make it possible to solve grand-challenge applied science and engineering problems on the future exascale machines. Contributing to filling this gap is the main objective of NUMEXAS.
The overall aim of NUMEXAS is therefore to develop, implement and demonstrate the next generation of numerical simulation methods to be run on exascale computing architectures. This cannot be done by merely scaling currently available codes, but only by implementing a new paradigm for the development of advanced numerical methods that truly exploit the intrinsic capabilities of the future exascale computing infrastructures.
The specific goal of NUMEXAS is the development of numerical methods for multiphysics problems in engineering, based on validated models, that enable scaling to millions of cores along the complete simulation pipeline. The major challenge in NUMEXAS will be the development of a new set of numerical methods and computer codes that will allow industries, governments and academia to routinely solve multidisciplinary large-scale problems in applied sciences and engineering with high efficiency and simplicity. We strive to demonstrate good scalability on up to several tens of thousands of cores in practice, and to predict the theoretical capability of significant further performance gains at even higher core counts.
The NUMEXAS methods and codes will be the main project outcomes, to be disseminated and exploited by the partners. Emphasis will be placed on the dissemination and exploitation of the NUMEXAS outputs among SMEs in Europe.
In order to achieve the above-mentioned goals, improvements are required along the whole simulation pipeline: parallel pre-processing of analysis data and mesh generation; scalable parallel field solvers for fluid mechanics, solid mechanics and coupled problems; parallel optimum design solvers that account for uncertainties; and parallel post-processing of numerical results.
Strategic value of exascale computing
Computer simulation-based engineering sciences have become the third pillar of engineering (alongside the traditional approaches of experiment and theory). It has become inconceivable to carry out experiments or develop a new theory without the heavy support of calculations during all stages of research, development and design. Any large-scale experiment in aeronautics (e.g. wind-tunnel tests), automotive engineering (e.g. crash tests), civil engineering (e.g. the ultimate load-carrying capacity of structures), telecom engineering (e.g. the efficiency of mobile communications) or naval engineering (e.g. ship hydrodynamics, collision tests) is nowadays preceded by a lengthy series of pre-experiment calculations (e.g. to make sure that measuring devices will operate in their expected ranges) and followed by another lengthy series of post-experiment calculations (e.g. to understand in depth the phenomena observed). The same procedures are required in all manufacturing industries. There is an overwhelming consensus that simulation-based engineering sciences are the vehicle to achieve progress in science and engineering in the foreseeable future, and that exascale computing will play a crucial role in them [4, 5, 6].
The current trend in engineering design is to increase physical fidelity (either by considering more scales or by linking different disciplines), improve accuracy and robustness, and extend the range of feasible (credible) problem classes. Increased physical fidelity and accuracy in all fields of engineering (civil, mechanical, aerospace, naval, telecom, etc.) naturally imply high performance computing requirements. In each of these engineering fields, current high-end computing capabilities are clearly insufficient. Furthermore, high-end computing is increasingly being used to provide databases for fast-running engineering models. This new area, also known as real-time computing, either interpolates from these detailed databases or extracts the fundamental modes of the system to obtain a reduced order model.
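The second route mentioned above, extracting the fundamental modes of a system to obtain a reduced order model, is commonly realized via proper orthogonal decomposition (POD) of solution snapshots. The following sketch illustrates the idea with NumPy; the snapshot matrix, its dimensions and the mode count are illustrative assumptions, not data or methods from NUMEXAS.

```python
import numpy as np

# Illustrative snapshot matrix: each column is a full-order solution
# (e.g. a flow field) sampled at one time instant or parameter value.
# Built here as a random low-rank matrix purely for demonstration.
rng = np.random.default_rng(0)
n_dof, n_snapshots = 1000, 50
snapshots = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snapshots))

# POD: the leading left singular vectors of the snapshot matrix are
# the "fundamental modes" of the sampled dynamics.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
n_modes = 5
basis = U[:, :n_modes]          # reduced basis, shape (n_dof, n_modes)

# Project a full-order state onto the basis and reconstruct it:
# the reduced model works with n_modes coordinates instead of n_dof.
state = snapshots[:, 0]
coeffs = basis.T @ state        # n_modes reduced coordinates
reconstruction = basis @ coeffs

rel_error = np.linalg.norm(state - reconstruction) / np.linalg.norm(state)
print(rel_error)                # tiny: 5 modes capture the rank-5 snapshot data
```

In practice the reduced coordinates would be evolved by a small projected system (e.g. Galerkin projection of the governing equations), which is what makes real-time evaluation feasible.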
The need for a next generation of faster computers to achieve these goals is obvious. Improved computational facilities will enhance the realism and accuracy of numerical simulations, leading to new discoveries and insights in engineering sciences.
In order to take full advantage of the next generation of high-end computers, purely incremental advances in existing numerical techniques will not suffice, if only from a sustainability point of view. New computational technologies have to be incorporated, and drastic, transformative changes have to be made in existing numerical methods as well.
Need for specific numerical methods for Exaflop Computing
On the software side, a perception of maturity is prevalent: in each of the disciplines that make up computational engineering we can find a large ecosystem of codes.
The unfortunate reality, which is lost in this perception of maturity, is that none of these codes, and in particular the ones most used in practice, will scale to the number of compute nodes/cores required to reach exaflop rates. Indeed, the number of cores used for most jobs submitted at typical research centres with large (>10,000 cores) machines rarely exceeds 128-256. A factor of roughly 1:10,000 in parallelism is thus required to scale up to the million cores of an exaflop machine, and perhaps none of the current codes will scale to this level.
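Why existing codes hit this wall can be illustrated with Amdahl's law, a standard scaling estimate not specific to NUMEXAS; the serial fractions below are illustrative assumptions. Even a tiny non-parallelizable fraction of the runtime caps the attainable speedup far below a million-fold.

```python
def amdahl_speedup(serial_fraction: float, n_cores: int) -> float:
    """Ideal speedup when a fixed fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Illustrative serial fractions: a code that looks "scalable" on 256 cores
# can still waste almost all of a million-core machine.
for s in (0.01, 0.001, 0.0001):
    print(f"serial fraction {s:.4f}: "
          f"256 cores -> {amdahl_speedup(s, 256):8.1f}x, "
          f"1e6 cores -> {amdahl_speedup(s, 1_000_000):8.1f}x")
```

For instance, with a serial fraction of 0.1% the speedup on a million cores stays below 1000x, which is why exascale readiness demands rethinking the whole pipeline rather than tuning existing solvers.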
The following table summarizes the level of complexity that currently available models can reach in solving real-case engineering problems (represented here by two particular examples already solved by the partners). It also summarizes NUMEXAS' starting point in terms of the performance of the available tools. To assess current capabilities, a set of indicators has been identified (such as the number of mesh elements, the number of cores or the number of unknowns) which gives a measurable picture of the state of the art.
Scientific and Technical objectives
In NUMEXAS we will develop the next generation of numerical simulation techniques, scalable to millions of cores, so that exascale-class problems can be solved routinely. The goal is the development and implementation of new numerical simulation techniques amenable to scaling to millions of cores along the complete simulation pipeline for a variety of large-scale multidisciplinary problems in applied sciences and engineering: parallel pre-processing and grid generation, parallel structured/unstructured high-order field solvers, parallel optimum design solvers that account for uncertainties, and parallel in-solver visualization and feature extraction. All of the numerical techniques will be implemented for hybrid OpenMP- and GPU-based local processing and for scalable, distributed, MPI-based off-node computing architectures.
Objective 1: Development of scalable, parallel pre-processing and mesh generation algorithms
Objective 2: Development of scalable, parallel structural and fluid mechanics solvers
Objective 3: Development of new scalable, parallel coupled multidisciplinary solvers
Objective 4: Development of scalable stochastic optimization solvers
Objective 5: Development of new scalable, parallel post-processing algorithms
Objective 6: Hardware benchmarking, profiling & optimization