Restarting regular work of the MPI Forum was initiated in three meetings, at EuroPVM/MPI'06 in Bonn, at EuroPVM/MPI'07 in Paris, and at SC'07 in Reno. In December 2007, a steering committee started the organization of new MPI Forum meetings at regular 8-week intervals. At the January 14–16, 2008 meeting in Chicago, the MPI Forum decided to combine the existing and future MPI documents into one document for each version of the MPI standard. For technical and historical reasons, this series was started with MPI-1.3. Additional Ballots 3 and 4 resolved old questions from the errata list started in 1995 up to new questions from recent years. After all documents (MPI-1.1, MPI-2, Errata for MPI-1.1 (Oct. 12, 1998), and MPI-2.1 Ballots 1–4) were combined into one draft document, a chapter author and review team were defined for each chapter. They cleaned up the document to achieve a consistent MPI-2.1 document. The final MPI-2.1 standard document was finished in June 2008 and released with a second vote in September 2008 at the meeting in Dublin, just before EuroPVM/MPI'08. The major work of the current MPI Forum is the preparation of MPI-3.

1.5 Background of MPI-2.2

MPI-2.2 is a minor update to the MPI-2.1 standard. This version addresses additional errors and ambiguities that were not corrected in the MPI-2.1 standard, as well as a small number of extensions to MPI-2.1 that met the following criteria:

• Any correct MPI-2.1 program is a correct MPI-2.2 program.

• Any extension must have significant benefit for users.

• Any extension must not require significant implementation effort. To that end, all such changes are accompanied by an open source implementation.

The discussions of MPI-2.2 proceeded concurrently with the MPI-3 discussions; in some cases, extensions were proposed for MPI-2.2 but were later moved to MPI-3.

1.6 Background of MPI-3.0

MPI-3.0 is a major update to the MPI standard. The updates include the extension of collective operations to include nonblocking versions, extensions to the one-sided operations, and a new Fortran 2008 binding. In addition, the deprecated C++ bindings have been removed, as well as many of the deprecated routines and MPI objects (such as the MPI_UB datatype).

1.7 Background of MPI-3.1

MPI-3.1 is a minor update to the MPI standard. Most of the updates are corrections and clarifications to the standard, especially for the Fortran bindings. New functions added include routines to manipulate MPI_Aint values in a portable manner, nonblocking collective I/O routines, and routines to get the index value by name for MPI_T performance and control variables. A general index was also added.
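The MPI_Aint manipulation routines referred to above are MPI_Aint_add and MPI_Aint_diff, introduced in MPI-3.1. The following is a minimal sketch of their use for portable address arithmetic; the structure and field names are arbitrary choices for the example.

#include <mpi.h>
#include <stdio.h>

struct particle {
    double coords[3];
    int    charge;
};

int main(int argc, char *argv[])
{
    struct particle p;
    MPI_Aint base, field, disp;

    MPI_Init(&argc, &argv);

    /* Obtain absolute addresses of the structure and of one field. */
    MPI_Get_address(&p, &base);
    MPI_Get_address(&p.charge, &field);

    /* Compute the displacement portably; plain integer arithmetic on
       addresses is not portable, which is why these routines exist. */
    disp = MPI_Aint_diff(field, base);

    /* MPI_Aint_add recovers the field address from base and displacement. */
    printf("displacement %ld, addresses match: %d\n",
           (long)disp, MPI_Aint_add(base, disp) == field);

    MPI_Finalize();
    return 0;
}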
1.8 Who Should Use This Standard?

This standard is intended for use by all those who want to write portable message-passing programs in Fortran and C (and access the C bindings from C++). This includes individual application programmers, developers of software designed to run on parallel machines, and creators of environments and tools. In order to be attractive to this wide audience, the standard must provide a simple, easy-to-use interface for the basic user while not semantically precluding the high-performance message-passing operations available on advanced machines.

1.9 What Platforms Are Targets For Implementation?

The attractiveness of the message-passing paradigm at least partially stems from its wide portability. Programs expressed this way may run on distributed-memory multiprocessors, networks of workstations, and combinations of all of these. In addition, shared-memory implementations, including those for multi-core processors and hybrid architectures, are possible. The paradigm will not be made obsolete by architectures combining the shared- and distributed-memory views, or by increases in network speeds. It thus should be both possible and useful to implement this standard on a great variety of machines, including those “machines” consisting of collections of other machines, parallel or not, connected by a communication network.

The interface is suitable for use by fully general MIMD programs, as well as those written in the more restricted style of SPMD. MPI provides many features intended to improve performance on scalable parallel computers with specialized interprocessor communication hardware. Thus, we expect that native, high-performance implementations of MPI will be provided on such machines. At the same time, implementations of MPI on top of standard Unix interprocessor communication protocols will provide portability to workstation clusters and heterogeneous networks of workstations.

1.10 What Is Included In The Standard?

The standard includes:

• Point-to-point communication (illustrated by the sketch following this list),

• Datatypes,

• Collective operations,

• Process groups,

• Communication contexts,

• Process topologies,

• Environmental management and inquiry,

• The Info object,

• Process creation and management,

• One-sided communication,

• External interfaces,

• Parallel file I/O,

• Language bindings for Fortran and C,

• Tool support.
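As an illustration of the first item in the list above, the following is a minimal sketch of a complete program that uses point-to-point communication; the tag value and message contents are arbitrary choices for the example.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Process 0 sends one integer to process 1 with tag 99. */
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Process 1 posts the matching receive. */
        MPI_Recv(&value, 1, MPI_INT, 0, 99, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

Such a program is started with at least two processes, e.g., mpiexec -n 2 ./a.out.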
1.11 What Is Not Included In The Standard?

The standard does not specify:

• Operations that require more operating system support than is currently standard; for example, interrupt-driven receives, remote execution, or active messages,

• Program construction tools,

• Debugging facilities.

There are many features that have been considered and not included in this standard. This happened for a number of reasons, one of which is the time constraint that was self-imposed in finishing the standard. Features that are not included can always be offered as extensions by specific implementations. Perhaps future versions of MPI will address some of these issues.

1.12 Organization of this Document

The following is a list of the remaining chapters in this document, along with a brief description of each.

• Chapter 2, MPI Terms and Conventions, explains notational terms and conventions used throughout the MPI document.

• Chapter 3, Point-to-Point Communication, defines the basic, pairwise communication subset of MPI. Send and receive are found here, along with many associated functions designed to make basic communication powerful and efficient.

• Chapter 4, Datatypes, defines a method to describe any data layout, e.g., an array of structures in memory, which can be used as a message send or receive buffer.

• Chapter 5, Collective Communication, defines process-group collective communication operations. Well-known examples of this are barrier and broadcast over a group of processes (not necessarily all the processes). With MPI-2, the semantics of collective communication was extended to include intercommunicators, and two new collective operations were added. MPI-3 adds nonblocking collective operations.

• Chapter 6, Groups, Contexts, Communicators, and Caching, shows how groups of processes are formed and manipulated, how unique communication contexts are obtained, and how the two are bound together into a communicator.
• Chapter 7, Process Topologies, explains a set of utility functions meant to assist in the mapping of process groups (a linearly ordered set) to richer topological structures such as multi-dimensional grids.

• Chapter 8, MPI Environmental Management, explains how the programmer can manage and make inquiries of the current MPI environment. These functions are needed for the writing of correct, robust programs, and are especially important for the construction of highly portable message-passing programs.

• Chapter 9, The Info Object, defines an opaque object that is used as input in several MPI routines.

• Chapter 10, Process Creation and Management, defines routines that allow for creation of processes.

• Chapter 11, One-Sided Communications, defines communication routines that can be completed by a single process. These include shared-memory operations (put/get) and remote accumulate operations.

• Chapter 12, External Interfaces, defines routines designed to allow developers to layer on top of MPI. This includes generalized requests, routines that decode MPI opaque objects, and threads.

• Chapter 13, I/O, defines MPI support for parallel I/O.

• Chapter 14, Tool Support, covers interfaces that allow debuggers, performance analyzers, and other tools to obtain data about the operation of MPI processes. This chapter includes Section 14.2 (Profiling Interface), which was a chapter in previous versions of MPI.

• Chapter 15, Deprecated Functions, describes routines that are kept for reference. However, usage of these functions is discouraged, as they may be deleted in future versions of the standard.

• Chapter 16, Removed Interfaces, describes routines and constructs that have been removed from MPI. These were deprecated in MPI-2, and the MPI Forum decided to remove them from the MPI-3 standard.

• Chapter 17, Language Bindings, discusses Fortran issues and describes language interoperability aspects between C and Fortran.

The Appendices are:

• Annex A, Language Bindings Summary, gives specific syntax in C and Fortran for all MPI functions, constants, and types.

• Annex B, Change-Log, summarizes some changes since the previous version of the standard.

• Several Index pages show the locations of examples, constants and predefined handles, callback routine prototypes, and all MPI functions.
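The nonblocking collective operations added by MPI-3 (mentioned in the Chapter 5 summary above) follow the same request/completion pattern as nonblocking point-to-point communication. A minimal sketch, assuming an MPI-3 implementation:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, sum;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Start a nonblocking reduction over all processes ... */
    MPI_Iallreduce(&rank, &sum, 1, MPI_INT, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... independent local work may overlap with the collective here ... */

    /* ... then complete it like any other nonblocking operation. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d: sum of ranks = %d\n", rank, sum);
    MPI_Finalize();
    return 0;
}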
MPI provides various interfaces to facilitate interoperability of distinct MPI implementations. Among these are the canonical data representation for MPI I/O and for MPI_PACK_EXTERNAL and MPI_UNPACK_EXTERNAL. The definition of an actual binding of these interfaces that will enable interoperability is outside the scope of this document.

A separate document consists of ideas that were discussed in the MPI Forum during the MPI-2 development and deemed to have value, but are not included in the MPI Standard. They are part of the “Journal of Development” (JOD), lest good ideas be lost and in order to provide a starting point for further work. The chapters in the JOD are:

• Chapter 2, Spawning Independent Processes, includes some elements of dynamic process management, in particular management of processes with which the spawning processes do not intend to communicate, that the Forum discussed at length but ultimately decided not to include in the MPI Standard.

• Chapter 3, Threads and MPI, describes some of the expected interaction between an MPI implementation and a thread library in a multi-threaded environment.

• Chapter 4, Communicator ID, describes an approach to providing identifiers for communicators.

• Chapter 5, Miscellany, discusses miscellaneous topics in the MPI JOD, in particular single-copy routines for use in shared-memory environments and new datatype constructors.

• Chapter 6, Toward a Full Fortran 90 Interface, describes an approach to providing a more elaborate Fortran 90 interface.

• Chapter 7, Split Collective Communication, describes a specification for certain nonblocking collective operations.

• Chapter 8, Real-Time MPI, discusses MPI support for real-time processing.
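Returning to the interoperability interfaces mentioned at the beginning of this section: the canonical data representation is selected by the name "external32". The following is a minimal sketch of packing data in this representation with MPI_PACK_EXTERNAL (shown in its C binding); the buffer handling is one plausible pattern, not a normative one.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int data[4] = {1, 2, 3, 4};
    MPI_Aint size, position = 0;
    void *buf;

    MPI_Init(&argc, &argv);

    /* Ask how many bytes the canonical "external32" layout needs. */
    MPI_Pack_external_size("external32", 4, MPI_INT, &size);
    buf = malloc(size);

    /* A buffer packed this way can be unpacked with
       MPI_Unpack_external by a different MPI implementation. */
    MPI_Pack_external("external32", data, 4, MPI_INT,
                      buf, size, &position);

    printf("packed %ld bytes\n", (long)position);

    free(buf);
    MPI_Finalize();
    return 0;
}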