Parallel Programming: Making Larger Computations Ever Faster

Parallel computing has become ubiquitous: today's smartphones and laptops contain 6-core processors. The basic purpose of using several cores is to speed up computations, and the purpose of connecting several processors is to solve larger problems by pooling their memory. This course demonstrates parallel programming using MPI (Message Passing Interface), the most common library of communication commands for distributed-memory clusters. Registered students will gain access to a cluster in the UMBC High Performance Computing Facility (hpcf.umbc.edu) consisting of more than 50 compute nodes with two 18-core processors each.
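To give a first flavor of what an MPI program looks like, here is a minimal "hello world" sketch in C. This is an illustration only, not the course's supplied code; the file name hello.c and the process count used below are assumptions.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    /* every process prints its own line, in no guaranteed order */
    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut the MPI runtime down */
    return 0;
}
```

On a typical cluster this would be compiled with the MPI wrapper compiler, e.g. `mpicc -o hello hello.c`, and launched with something like `mpirun -np 4 ./hello`; the exact launch procedure varies from cluster to cluster.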
What You Will Learn:
- First exposure to Linux, the operating system that runs on virtually all of the world's high-performance computers, and its basic utilities.
- Gentle introduction to the C programming language by reading, modifying, and extending supplied code.
- Experience the power of parallel computing on a distributed-memory cluster with dozens of networked compute nodes.
- Finish the week with your own parallel performance study that shows parallel code running in seconds compared to hours for the serial code.
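Such a performance study is commonly summarized by speedup and efficiency. As a brief sketch, where T(p) denotes the wall-clock time on p processes, the standard definitions are:

```latex
% speedup and parallel efficiency on p processes,
% where T(p) is the wall-clock time on p processes
S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}
```

For example, with hypothetical timings of T(1) = 3600 seconds for the serial code and T(64) = 60 seconds on 64 cores, the speedup is S(64) = 3600/60 = 60 and the efficiency is E(64) = 60/64 ≈ 0.94, i.e., about 94%.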
Why Take This Course? Understand the fundamental strategy behind all large-scale simulations today, such as the daily weather forecast, hurricane prediction, simulation of airflow around a plane's wing, AI, Google's PageRank, and more. Bonus: with this background and experience, you can apply for internships at National Laboratories throughout the U.S., where the world's biggest supercomputers are located!
Live Online Synchronous (LOS): Students are in class continuously for the duration of the course, with an instructor present for presentations, assignments, and group work.