Parallel computing is everywhere in modern computing: multi-core CPUs and GPUs, supercomputers, and even mobile devices such as smartphones all rely on parallel processing. The goal of this course is to provide an introduction to the foundations of parallel programming and to examine the performance gains and trade-offs involved in designing and implementing parallel computing systems. Specifically, this course will emphasize concepts related to parallel programming on multicore processors.
Topics explored in the course will include (but are not limited to) the following:
- Processes and threads
- Shared memory
- Hardware mechanisms for parallel computing
- Synchronization and communication for parallel systems
- Performance optimizations
- Parallel data structures
- Memory consistency and hierarchies for parallel computing
- Patterns of parallel programming
- Parallel programming on GPUs
- Additional topics dependent on student request and time
The course will include weekly homework, two exams, and projects. The weekly assignments will contain practice problems to help reinforce the concepts covered in lecture. The projects provide the opportunity to apply the skills you learn to develop systems that can benefit from parallelization. Potential project domains include AI and machine learning, computer graphics, cryptocurrency technologies, and scientific visualization.
This course will not have a required textbook. Along with the lecture notes, students may find the following reference helpful in understanding the course material:
- *The Art of Multiprocessor Programming* by Maurice Herlihy and Nir Shavit
Additional readings/references will be provided when necessary.