1.8. WHAT IS INCLUDED IN THE STANDARD? 5

networks of workstations, and combinations of all of these. In addition, shared-memory implementations, including those for multi-core processors and hybrid architectures, are possible. The paradigm will not be made obsolete by architectures combining the shared- and distributed-memory views, or by increases in network speeds. It thus should be both possible and useful to implement this standard on a great variety of machines, including those “machines” consisting of collections of other machines, parallel or not, connected by a communication network.

The interface is suitable for use by fully general MIMD programs, as well as those written in the more restricted style of SPMD. MPI provides many features intended to improve performance on scalable parallel computers with specialized interprocessor communication hardware. Thus, we expect that native, high-performance implementations of MPI will be provided on such machines. At the same time, implementations of MPI on top of standard Unix interprocessor communication protocols will provide portability to workstation clusters and heterogeneous networks of workstations.

1.8 What Is Included In The Standard?

The standard includes:

• Point-to-point communication
• Datatypes
• Collective operations
• Process groups
• Communication contexts
• Process topologies
• Environmental Management and inquiry
• The info object
• Process creation and management
• One-sided communication
• External interfaces
• Parallel file I/O
• Language Bindings for Fortran, C and C++
• Profiling interface
6 CHAPTER 1. INTRODUCTION TO MPI

1.9 What Is Not Included In The Standard?

The standard does not specify:

• Operations that require more operating system support than is currently standard; for example, interrupt-driven receives, remote execution, or active messages,
• Program construction tools,
• Debugging facilities.

There are many features that have been considered and not included in this standard. This happened for a number of reasons, one of which is the time constraint that was self-imposed in finishing the standard. Features that are not included can always be offered as extensions by specific implementations. Perhaps future versions of MPI will address some of these issues.

1.10 Organization of this Document

The following is a list of the remaining chapters in this document, along with a brief description of each.

• Chapter 2, MPI Terms and Conventions, explains notational terms and conventions used throughout the MPI document.
• Chapter 3, Point to Point Communication, defines the basic, pairwise communication subset of MPI. Send and receive are found here, along with many associated functions designed to make basic communication powerful and efficient.
• Chapter 4, Datatypes, defines a method to describe any data layout, e.g., an array of structures in memory, which can be used as a message send or receive buffer.
• Chapter 5, Collective Communications, defines process-group collective communication operations. Well-known examples of this are barrier and broadcast over a group of processes (not necessarily all the processes). With MPI-2, the semantics of collective communication were extended to include intercommunicators. It also adds two new collective operations.
• Chapter 6, Groups, Contexts, Communicators, and Caching, shows how groups of processes are formed and manipulated, how unique communication contexts are obtained, and how the two are bound together into a communicator.
• Chapter 7, Process Topologies, explains a set of utility functions meant to assist in the mapping of process groups (a linearly ordered set) to richer topological structures such as multi-dimensional grids.
• Chapter 8, MPI Environmental Management, explains how the programmer can manage and make inquiries of the current MPI environment. These functions are needed for the writing of correct, robust programs, and are especially important for the construction of highly-portable message-passing programs.
1.10. ORGANIZATION OF THIS DOCUMENT 7

• Chapter 9, The Info Object, defines an opaque object that is used as input to several MPI routines.
• Chapter 10, Process Creation and Management, defines routines that allow for the creation of processes.
• Chapter 11, One-Sided Communications, defines communication routines that can be completed by a single process. These include shared-memory operations (put/get) and remote accumulate operations.
• Chapter 12, External Interfaces, defines routines designed to allow developers to layer on top of MPI. This includes generalized requests, routines that decode MPI opaque objects, and threads.
• Chapter 13, I/O, defines MPI support for parallel I/O.
• Chapter 14, Profiling Interface, explains a simple name-shifting convention that any MPI implementation must support. One motivation for this is the ability to put performance profiling calls into MPI without the need for access to the MPI source code. The name shift is merely an interface; it says nothing about how the actual profiling should be done, and in fact the name shift can be useful for other purposes.
• Chapter 15, Deprecated Functions, describes routines that are kept for reference. However, use of these functions is discouraged, as they may be deleted in future versions of the standard.
• Chapter 16, Language Bindings, describes the C++ binding, discusses Fortran issues, and describes language interoperability aspects between C, C++, and Fortran.

The Appendices are:

• Annex A, Language Bindings Summary, gives specific syntax in C, C++, and Fortran for all MPI functions, constants, and types.
• Annex B, Change-Log, summarizes major changes since the previous version of the standard.
• Several index pages show the locations of examples, constants and predefined handles, callback routines’ prototypes, and all MPI functions.

MPI provides various interfaces to facilitate interoperability of distinct MPI implementations.
Among these are the canonical data representation for MPI I/O and for MPI_PACK_EXTERNAL and MPI_UNPACK_EXTERNAL. The definition of an actual binding of these interfaces that will enable interoperability is outside the scope of this document.

A separate document consists of ideas that were discussed in the MPI Forum and deemed to have value, but are not included in the MPI Standard. They are part of the “Journal of Development” (JOD), lest good ideas be lost and in order to provide a starting point for further work. The chapters in the JOD are:

• Chapter 2, Spawning Independent Processes, includes some elements of dynamic process management, in particular management of processes with which the spawning processes do not intend to communicate, that the Forum discussed at length but ultimately decided not to include in the MPI Standard.
8 CHAPTER 1. INTRODUCTION TO MPI

• Chapter 3, Threads and MPI, describes some of the expected interaction between an MPI implementation and a thread library in a multi-threaded environment.
• Chapter 4, Communicator ID, describes an approach to providing identifiers for communicators.
• Chapter 5, Miscellany, discusses miscellaneous topics in the MPI JOD, in particular single-copy routines for use in shared-memory environments and new datatype constructors.
• Chapter 6, Toward a Full Fortran 90 Interface, describes an approach to providing a more elaborate Fortran 90 interface.
• Chapter 7, Split Collective Communication, describes a specification for certain non-blocking collective operations.
• Chapter 8, Real-Time MPI, discusses MPI support for real-time processing.
Chapter 2

MPI Terms and Conventions

This chapter explains notational terms and conventions used throughout the MPI document, some of the choices that have been made, and the rationale behind those choices. It is similar to the MPI-1 Terms and Conventions chapter but differs in some major and minor ways. Some of the major areas of difference are the naming conventions, some semantic definitions, file objects, Fortran 90 vs. Fortran 77, C++, processes, and interaction with signals.

2.1 Document Notation

Rationale. Throughout this document, the rationale for the design choices made in the interface specification is set off in this format. Some readers may wish to skip these sections, while readers interested in interface design may want to read them carefully. (End of rationale.)

Advice to users. Throughout this document, material aimed at users and that illustrates usage is set off in this format. Some readers may wish to skip these sections, while readers interested in programming in MPI may want to read them carefully. (End of advice to users.)

Advice to implementors. Throughout this document, material that is primarily commentary to implementors is set off in this format. Some readers may wish to skip these sections, while readers interested in MPI implementations may want to read them carefully. (End of advice to implementors.)

2.2 Naming Conventions

In many cases MPI names for C functions are of the form MPI_Class_action_subset. This convention originated with MPI-1. Since MPI-2 an attempt has been made to standardize the names of MPI functions according to the following rules. The C++ bindings in particular follow these rules (see Section 2.6.4 on page 18).

1. In C, all routines associated with a particular type of MPI object should be of the form MPI_Class_action_subset or, if no subset exists, of the form MPI_Class_action.
2. In Fortran, all routines associated with a particular type of MPI object should be of the form MPI_CLASS_ACTION_SUBSET or, if no subset exists, of the form