Introduction to Parallel Computing
Table of Contents
Abstract
Overview
What is Parallel Computing?
Why Use Parallel Computing?
Concepts and Terminology
von Neumann Computer Architecture
Flynn's Classical Taxonomy
Some General Parallel Terminology
Parallel Computer Memory Architectures
Shared Memory
Distributed Memory
Hybrid Distributed-Shared Memory
Parallel Programming Models
Overview
Shared Memory Model
Threads Model
Message Passing Model
Data Parallel Model
Other Models
Designing Parallel Programs
Automatic vs. Manual Parallelization
Understand the Problem and the Program
Partitioning
Communications
Synchronization
Data Dependencies
Load Balancing
Granularity
I/O
Limits and Costs of Parallel Programming
Performance Analysis and Tuning
Parallel Examples
Array Processing
PI Calculation
Simple Heat Equation
1-D Wave Equation
References and More Information
Abstract
This presentation covers the basics of parallel computing. It begins with a brief overview and some concepts and terminology associated with parallel computing, then explores parallel memory architectures and programming models. These topics are followed by a discussion of several issues related to designing parallel programs. The last portion of the presentation examines how to parallelize several different types of serial programs.
Level/Prerequisites: None
Overview
What is Parallel Computing?
Traditionally, software has been written for serial computation (see the sketch after this list):
To be run on a single computer having a single Central Processing Unit (CPU).
A problem is broken into a discrete series of instructions.
Instructions are executed one after another.
Only one instruction may execute at any moment in time.
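For illustration, here is a minimal C sketch of the serial model. The array and loop are hypothetical, chosen only to show a single instruction stream executing in order on one CPU.

#include <stdio.h>

#define N 8

int main(void)
{
    double a[N];
    double sum = 0.0;

    /* One CPU executes these iterations strictly one after
       another; only one instruction executes at any moment. */
    for (int i = 0; i < N; i++) {
        a[i] = i * 2.0;   /* step 1 for element i           */
        sum += a[i];      /* step 2, only after step 1 ends */
    }

    printf("serial sum = %f\n", sum);
    return 0;
}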
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem (see the sketch after this list):
To be run using multiple CPUs.
A problem is broken into discrete parts that can be solved concurrently.
Each part is further broken down to a series of instructions.
Instructions from each part execute simultaneously on different CPUs.
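As one possible realization of this model, the following sketch reworks the serial loop above using OpenMP, one of several approaches discussed later under the Threads Model. OpenMP is an assumption made here only for illustration; the code must be built with an OpenMP-capable compiler (for example, gcc -fopenmp).

#include <stdio.h>
#include <omp.h>

#define N 8

int main(void)
{
    double a[N];
    double sum = 0.0;

    /* Assumed OpenMP approach: the loop is broken into discrete
       parts; iterations are divided among threads that execute
       simultaneously on different CPUs/cores. The reduction
       clause combines each thread's partial sum at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = i * 2.0;
        sum += a[i];
    }

    printf("parallel sum = %f (up to %d threads)\n",
           sum, omp_get_max_threads());
    return 0;
}

The result matches the serial version because each part of the loop is independent of the others; coordinating parts that are not independent is the subject of the Communications, Synchronization, and Data Dependencies sections later in this presentation.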