Mathematical Research at the University of Cambridge

Numerical software is being reconstructed to provide opportunities to tune the accuracy of computation dynamically to the requirements of the application, resulting in savings of memory, time, and energy. Floating-point computation in science and engineering has a history of “oversolving” relative to the requirements, or indeed the worthiness, of many models. Real datatypes are so routinely defaulted to double precision that GPUs did not gain wide acceptance in simulation environments until they provided, in hardware, operations not required in their original domain of graphics. However, driven by performance and energy incentives, much of computational science is now reverting to lower-precision arithmetic where possible. Many matrix operations, considered blockwise, tolerate lower precision, and many blocks can in addition be approximated by low-rank near-equivalents. This leads to a smaller memory footprint, which implies higher residency in the memory hierarchy and hence less time and energy spent copying data; these savings may even dwarf those from fewer and cheaper flops. We provide examples from several application domains, including a look at campaigns in geospatial statistics and seismic processing that earned Gordon Bell Prize finalist status in 2022 and 2023, respectively.
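To make the blockwise low-rank idea concrete, here is a minimal NumPy sketch, not the speaker's implementation: each tile of a matrix is replaced by truncated-SVD factors when that representation is both accurate to a tolerance and smaller than the dense tile, and the stored data is demoted from double to single precision. The tile size, tolerance, test kernel, and uniform float32 demotion are all illustrative assumptions.

import numpy as np

def compress_blockwise(A, block=256, tol=1e-4):
    """Partition A into square tiles; store each tile either as dense
    float32 or, when it is smaller, as truncated-SVD factors (U*s, Vt)
    in float32. Returns the tile dict and the compressed byte count."""
    tiles, nbytes = {}, 0
    n = A.shape[0]
    for i in range(0, n, block):
        for j in range(0, n, block):
            T = A[i:i + block, j:j + block]
            U, s, Vt = np.linalg.svd(T, full_matrices=False)
            r = int(np.sum(s > tol * s[0]))  # numerical rank at tolerance tol
            if r * (T.shape[0] + T.shape[1]) < T.size:
                Us = (U[:, :r] * s[:r]).astype(np.float32)  # scale columns by s
                V = Vt[:r, :].astype(np.float32)
                tiles[i, j] = ("lowrank", Us, V)
                nbytes += Us.nbytes + V.nbytes
            else:
                D = T.astype(np.float32)
                tiles[i, j] = ("dense", D)
                nbytes += D.nbytes
    return tiles, nbytes

def matvec(tiles, x, n, block=256):
    """y = A @ x applied through the compressed tiles."""
    y = np.zeros(n, dtype=np.float32)
    for (i, j), t in tiles.items():
        xs = x[j:j + block].astype(np.float32)
        if t[0] == "lowrank":
            _, Us, V = t
            y[i:i + Us.shape[0]] += Us @ (V @ xs)
        else:
            y[i:i + t[1].shape[0]] += t[1] @ xs
    return y

# Example: a smooth kernel matrix whose off-diagonal tiles are low rank.
pts = np.linspace(0.0, 1.0, 1024)
A = 1.0 / (1.0 + np.abs(pts[:, None] - pts[None, :]))
tiles, nbytes = compress_blockwise(A, block=256, tol=1e-4)
x = np.random.default_rng(0).standard_normal(1024)
err = np.linalg.norm(matvec(tiles, x, 1024) - A @ x) / np.linalg.norm(A @ x)
print(f"compressed size: {nbytes / A.nbytes:.1%} of dense float64; "
      f"rel. matvec error {err:.1e}")

In practice the precision would be chosen per tile (e.g., fp64, fp32, or fp16 depending on the tolerance and the tile's contribution); the uniform float32 demotion above is used only to keep the sketch short.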

Further information

Time:

May 16th 2024
15:00 to 16:00

Venue:

Centre for Mathematical Sciences, MR14

Speaker:

David Keyes (KAUST)

Series:

Applied and Computational Analysis