Part III: Parallel Programming Models

The Implicit Model
1. Basic Concept: With this approach, programmers write code in a familiar sequential programming language, and the compiler is responsible for automatically converting it into parallel code (e.g., KAP from Kuck and Associates, FORGE from Advanced Parallel Research).
2. Features
- Simpler semantics: no deadlock; always determinate
- Better portability, since the program is sequential
- A single thread of control makes testing, debugging, and correctness verification easier
3. Disadvantages
- It is extremely difficult to develop an auto-parallelizing compiler
- Automatically parallelized code always has low efficiency
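As a minimal illustration (not taken from the slides), the function below is ordinary sequential C; under the implicit model an auto-parallelizing compiler such as KAP or FORGE would be expected to detect that the loop iterations are independent and generate parallel code without any change to the source.

```c
#include <stddef.h>

/* Ordinary sequential code: the programmer expresses no parallelism.
 * An auto-parallelizing compiler may recognize that the iterations are
 * independent and execute them in parallel on its own. */
void vector_add(const double *a, const double *b, double *c, size_t n)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];   /* no loop-carried dependence */
}
```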
The Data-Parallel Model
1. Basic Concept: The data-parallel model is the native model for SIMD machines. Data-parallel programming emphasizes local computations and data-routing operations. It can be implemented either on SIMD or in SPMD style. Fortran 90 and HPF are examples.
2. Features
- Single thread: as far as control flow is concerned, a data-parallel program is just like a sequential program
- Parallel synchronous operations on large data structures (e.g., arrays)
- Loosely synchronous: there is a synchronization after every statement
- Single address space: all variables reside in a single address space
- Explicit data allocation: by allocating data themselves, users may reduce communication overhead
- Implicit communication: users do not have to specify communication operations
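A minimal sketch (the function name and block distribution are assumptions, not from the slides) of how a single data-parallel statement such as the Fortran 90 array assignment A = B + C can be realized in SPMD style: every process applies the same statement to its own block of the array, while the programmer sees only one whole-array operation.

```c
#include <stddef.h>

/* SPMD realization of the data-parallel statement A = B + C on arrays of
 * length n distributed blockwise over nprocs processes.  In Fortran 90 the
 * programmer writes just "A = B + C"; in HPF the distribution is stated by
 * a directive such as  !HPF$ DISTRIBUTE A(BLOCK).
 * (Hypothetical helper for illustration only.) */
void data_parallel_add(double *a, const double *b, const double *c,
                       size_t n, int myid, int nprocs)
{
    size_t block = (n + nprocs - 1) / nprocs;   /* size of each block   */
    size_t lo = (size_t)myid * block;           /* first index I own    */
    size_t hi = lo + block < n ? lo + block : n;

    for (size_t i = lo; i < hi; i++)            /* same statement, local data */
        a[i] = b[i] + c[i];
}
```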
The Shared-Variable Model
1. Basic Concept: Shared-variable programming is the native model for PVP, SMP, and DSM machines. There is an ANSI X3H5 standard. The portability of programs is problematic.
2. Features
- Multiple threads: a shared-variable program uses either SPMD (Single-Program-Multiple-Data) or MPMD (Multiple-Program-Multiple-Data)
- Asynchronous: each process executes at its own pace
- Explicit synchronization: special synchronization operations (barrier, lock, critical region, event) are used
- Single address space: all variables reside in a single address space
- Implicit data and computation distribution: because data can be considered to reside in shared memory, there is no need to explicitly distribute data and computation
- Implicit communication: communication is done implicitly through reading/writing of shared variables
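The slide does not show X3H5 code; as a sketch of the model's features, the POSIX threads program below runs multiple asynchronous threads in one address space, communicates implicitly through a shared counter, and synchronizes explicitly with a lock.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared variable: all threads see it in the single address space. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* explicit synchronization (lock)        */
        counter++;                      /* implicit communication via shared data */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)         /* threads proceed asynchronously */
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter); /* prints 400000 */
    return 0;
}
```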
The Message-Passing Model
1. Basic Concept: Message-passing programming is the native model for MPP and COW machines. The portability of programs is enhanced greatly by the PVM and MPI libraries.
2. Features
- Multiple threads: a message-passing program uses either SPMD (Single-Program-Multiple-Data) or MPMD (Multiple-Program-Multiple-Data)
- Asynchronous operations at different nodes
- Explicit synchronization: special synchronization operations (barrier, lock, critical region, event) are used
- Multiple address spaces: the processes of a parallel program reside in different address spaces
- Explicit data mapping and workload allocation
- Explicit communication: the processes interact by executing message-passing operations
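A minimal MPI sketch in C (assuming at least two processes are launched, e.g. with mpirun -np 2): process 0 explicitly sends a value that process 1 explicitly receives, and a barrier provides explicit synchronization; no data is shared between address spaces.

```c
#include <stdio.h>
#include <mpi.h>

/* Each process has its own address space; data moves only through
 * explicit message-passing operations. */
int main(int argc, char **argv)
{
    int rank, size, token;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I?          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes */

    if (size >= 2) {                        /* needs >= 2 processes */
        if (rank == 0) {
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* explicit send */
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                          /* explicit receive */
            printf("process %d received %d\n", rank, token);
        }
    }

    MPI_Barrier(MPI_COMM_WORLD);            /* explicit synchronization */
    MPI_Finalize();
    return 0;
}
```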
Comparison of Parallel Programming Models
[Table comparing the implicit, data-parallel, shared-variable, and message-passing models along the features listed on the preceding slides]