Parallel Programming introduces students to the paradigm of parallel computing. Nowadays almost all computer systems contain so-called multi-core chips, so exploiting the full performance of such systems requires parallel programming. This course covers shared-memory parallelization with OpenMP and Java threads as well as message-passing parallelization with MPI on distributed-memory architectures. The course starts with a recap of the programming language C, followed by a brief theoretical introduction to parallel computing. It then treats topics such as MPI communication, race conditions, deadlocks, efficiency, and the problem of serialization. The course is accompanied by practical labs in which students have the opportunity to apply the newly acquired concepts. After completing this course, students will be able to write basic parallel programs with MPI and OpenMP and to recognize and resolve common problems such as race conditions and deadlocks.
Programming experience. The examples and exercises are given in C; however, any C, C++, or Java programmer will be able to solve them.
Parallel Programming with MPI; Peter Pacheco; Morgan Kaufmann, 1996; www.cs.usfca.edu/mpi/ (a very early revision is available online)
|File|Size|Date|
|parallel_programming_exam-prep_notes.pdf|284.0 KiB|2022/01/01 18:10|
|mock_exam.pdf|1.0 MiB|2021/10/17 18:10|
|introduction_pp.pdf|321.2 KiB|2021/10/15 18:10|
|openmp_1.pdf|872.9 KiB|2021/10/15 18:10|
|openmp_2_performance_until_slide_57_.pdf|246.1 KiB|2021/10/15 18:10|
|introduction.pdf|321.0 KiB|2021/09/21 18:10|
|mpi_2.pdf|1008.4 KiB|2021/09/21 18:10|
|mpi_1.pdf|321.5 KiB|2021/09/21 18:10|