Contact

Position:
INAF – Osservatorio Astronomico di Trieste, Trieste, Italy

Miscellaneous Information

Abstract Reference: 241
Identifier: I12.1
Presentation: Invited Speaker

Shall numerical astrophysics step into the era of exascale computing?

Authors:
Taffoni Giuliano

The development of exascale computing facilities, machines capable of executing O(10^18) operations per second, will be characterised by significant and dramatic changes in computing hardware architecture relative to current petascale supercomputers.
To build an exascale resource, we need to address major technology challenges related to energy consumption, network topology, memory and storage, resilience, and, of course, the programming model and system software.
From a computational science point of view, the architectural design of existing petascale supercomputers, where computing power is mainly delivered by accelerators (GPUs, FPGAs, Cell processors, etc.), already impacts scientific applications. This will become even more evident on future exascale resources, which will involve millions of processing units and cause parallel application scalability issues due to sequential application parts, synchronising communication and other bottlenecks. Future applications must be designed so that systems with this number of computing units can be exploited efficiently.
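
To illustrate why sequential parts become the limiting factor at this scale, here is a minimal Python sketch (an addition, not part of the original abstract; the function name and figures are purely illustrative) of Amdahl's law, which bounds the achievable speedup when a fraction s of an application is sequential:

# Illustrative sketch (assumption: a simple Amdahl's-law model of strong scaling).
# Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N),
# where s is the sequential fraction and N the number of processing units.
def amdahl_speedup(serial_fraction, n_units):
    """Upper bound on parallel speedup with n_units processing units."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

# Even a 0.1% sequential part caps the speedup near 1000x,
# no matter how many of an exascale machine's ~10^6 cores are used.
for n in (10**3, 10**6, 10**9):
    print(f"{n:>10} units -> speedup <= {amdahl_speedup(0.001, n):.1f}")

Synchronising communication adds further overhead on top of this bound, which is why future applications must be redesigned rather than simply scaled up.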

An approach based on hardware/software co-design is crucial to enable exascale computing by closing the application-architecture performance gap (the gap between the peak capability of the hardware and the performance actually delivered by HPC software), contributing to the design of supercomputing resources that can be effectively exploited by real scientific applications.

In astronomy and astrophysics, HPC numerical simulations are today one of the most effective instruments for comparing observations with theoretical models, making HPC infrastructures a theoretical laboratory in which to test physical processes. Moreover, they are mandatory during both the preparatory and operational phases of scientific experiments. The size and complexity of the new experiments (SKA, CTA, EUCLID, ATHENA, etc.) require bigger numerical laboratories, pushing toward the use of exascale computing capabilities.

This talk will summarise the major challenges on the road to exascale and how much progress has been made in Europe in recent years. I will present the effort made by the EU-funded ExaNeSt project to build a prototype of an exascale facility based on ARM CPUs and accelerators, designed using a hardware/software co-design approach in which astrophysical codes play a central role in defining the network topology and the storage system. Finally, I will discuss how co-design will impact numerical codes, which must be re-engineered to profit from exascale supercomputers.