Lecture: High Performance Computing

The increasing complexity of scientific simulations that solve problems in the natural sciences requires the use of suitable parallel computer systems. Using parallel computers efficiently requires an understanding of the underlying microarchitecture in order to develop strategies for exploiting parallelism and optimizing performance. Furthermore, simulation codes must be adapted accordingly at the level of parallel programming models or parallel algorithms. This lecture presents the prevalent methods and tools applied in the field of high-performance computing.

The objectives of the lecture are an understanding of the essential parallel computer architectures, knowledge of basic design methods and optimization strategies for serial and parallel algorithms, familiarity with methods for the runtime analysis of parallel applications, and a basic understanding of the fundamental operations of parallel programming.

Content

  • Characteristics of microarchitectures
  • Parallel computer architectures
  • Network topologies
  • Blocking algorithms to exploit data locality in deep memory hierarchies (see the loop-tiling sketch after this list)
  • Design principles of parallel algorithms
  • Modelling parallelism (speedup, efficiency, Amdahl's law) and performance (see the formula after this list)
  • Introduction to parallel programming
  • Further selected topics
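
To illustrate the blocking idea from the topic list, the following minimal sketch (not taken from the lecture material; the function name matmul_blocked and the tile size TILE are assumptions chosen for illustration) tiles a matrix multiplication so that small sub-blocks are reused from cache instead of being streamed from main memory on every pass:

    #include <stddef.h>

    /* Illustrative sketch of cache blocking (loop tiling) for C = C + A * B
     * with n x n row-major matrices. TILE is a tuning parameter chosen so
     * that a few TILE x TILE blocks fit into cache; 64 is only a placeholder. */
    #define TILE 64

    void matmul_blocked(size_t n, const double *A, const double *B, double *C)
    {
        for (size_t ii = 0; ii < n; ii += TILE)
            for (size_t kk = 0; kk < n; kk += TILE)
                for (size_t jj = 0; jj < n; jj += TILE)
                    /* Within one tile, each loaded element is reused up to
                     * TILE times before it is evicted from cache. */
                    for (size_t i = ii; i < ii + TILE && i < n; ++i)
                        for (size_t k = kk; k < kk + TILE && k < n; ++k)
                            for (size_t j = jj; j < jj + TILE && j < n; ++j)
                                C[i * n + j] += A[i * n + k] * B[k * n + j];
    }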
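
As a pointer to the kind of performance modelling covered in the lecture, Amdahl's law relates the achievable speedup to the parallelizable fraction of a program (the notation below is one common convention, with f the parallelizable fraction of the serial runtime and p the number of processors):

    S(p) = \frac{1}{(1 - f) + f/p}, \qquad E(p) = \frac{S(p)}{p}

For example, with f = 0.95 and p = 100 processors, S(100) = 1 / (0.05 + 0.0095) ≈ 16.8, i.e. the remaining 5 % serial work already limits the speedup to a small fraction of p and the efficiency to roughly 17 %.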

The list of current courses can be found at Teaching – Chair for Computer Science 12.