MPI-4.0: MPI-4.0 is a major update to the MPI Standard. The editors and organizers of MPI-4.0 have been:

• Martin Schulz, MPI-4.0 Chair, Info Object, External Interfaces
• Richard Graham, MPI-4.0 Treasurer
• Wesley Bland, MPI-4.0 Secretary, Backward Incompatibilities
• William Gropp, MPI-4.0 Editor, Steering Committee, Front Matter, Introduction, One-Sided Communications, and Bibliography
• Rolf Rabenseifner, Steering Committee, Process Topologies, Deprecated Functions, Removed Interfaces, Annex Language Bindings Summary, and Annex Change-Log
• Purushotham V. Bangalore, Language Bindings
• Claudia Blaas-Schenner, Terms and Conventions
• George Bosilca, Datatypes and Environmental Management
• Ryan E. Grant, Partitioned Communication
• Marc-André Hermanns, Tool Support
• Daniel Holmes, Point-to-Point Communication, Sessions
• Guillaume Mercier, Groups, Contexts, Communicators, Caching
• Howard Pritchard, Process Creation and Management
• Anthony Skjellum, Collective Communication, I/O

As part of the development of MPI-4.0, a number of working groups were established. In some cases, the work for these groups overlapped with multiple chapters. The following describes the major working groups and the leaders of those groups:

Collective Communication, Topology, Communicators: Torsten Hoefler, Andrew Lumsdaine, and Anthony Skjellum
Fault Tolerance: Wesley Bland, Aurélien Bouteiller, and Richard Graham
Hardware Topologies: Guillaume Mercier
Hybrid & Accelerator: Pavan Balaji and James Dinan
Large Counts: Jeff Hammond
Persistence: Anthony Skjellum
Point-to-Point Communication: Daniel Holmes and Richard Graham
Remote Memory Access: William Gropp and Rajeev Thakur
Semantic Terms: Purushotham V. Bangalore and Rolf Rabenseifner
Sessions: Daniel Holmes and Howard Pritchard
Tools: Kathryn Mohror and Marc-André Hermanns

The following list includes some of the active participants who attended MPI Forum meetings or participated in the e-mail discussions.

Julien Adam, Abdelhalim Amer, Charles Archer, Ammar Ahmad Awan, Pavan Balaji, Purushotham V. Bangalore, Mohammadreza Bayatpour, Jean-Baptiste Besnard, Claudia Blaas-Schenner, Wesley Bland, Gil Bloch, George Bosilca, Aurélien Bouteiller, Ben Bratu, Alexander Calvert, Nicholas Chaimov, Sourav Chakraborty, Steffen Christgau, Ching-Hsiang Chu, Mikhail Chuvelev, James Clark, Carsten Clauss, Isaias Alberto Compres Urena, Giuseppe Congiu, Brandon Cook, James Custer, Anna Daly, Hoang-Vu Dang, James Dinan, Matthew Dosanjh, Murali Emani, Christian Engelmann, Noah Evans, Ana Gainaru, Esthela Gallardo, Marc Gamell Balmana, Balazs Gerofi, Salvatore Di Girolamo, Brice Goglin, Manjunath Gorentla Venkata, Richard Graham, Ryan E. Grant, Stanley Graves, William Gropp, Siegmar Gross, Taylor Groves, Yanfei Guo, Khaled Hamidouche, Jeff Hammond, Marc-André Hermanns, Nathan Hjelm, Torsten Hoefler, Daniel Holmes, Atsushi Hori, Josh Hursey, Ilya Ivanov, Julien Jaeger, Emmanuel Jeannot, Sylvain Jeaugey, Jithin Jose, Krishna Kandalla, Takahiro Kawashima, Chulho Kim, Michael Knobloch, Alice Koniges, Sameer Kumar, Kim Kyunghun, Ignacio Laguna Peralta, Stefan Lankes, Tonglin Li, Xioyi Lu, Kavitha Madhu, Alexey Malhanov, Ryan Marshall, William Marts, Guillaume Mercier, Ali Mohammed, Kathryn Mohror,
Takeshi Nanri, Thomas Naughton, Christoph Niethammer, Takafumi Nose, Lena Oden, Steve Oyanagi, Guillaume Papauré, Ivy Peng, Antonio Peña, Simon Pickartz, Artem Polyakov, Sreeram Potluri, Howard Pritchard, Martina Prugger, Marc Pérache, Rolf Rabenseifner, Nicholas Radcliffe, Ken Raffenetti, Craig Rasmussen, Soren Rasmussen, Hubert Ritzdorf, Sergio Rivas-Gomez, Davide Rossetti, Martin Ruefenacht, Amit Ruhela, Whit Schonbein, Joseph Schuchart, Martin Schulz, Sangmin Seo, Sameh Sharkawi, Sameer Shende, Min Si, Anthony Skjellum, Brian Smith, David Solt, Jeffrey M. Squyres, Srinivas Sridharan, Hari Subramoni, Nawrin Sultana, Shinji Sumimoto, Sayantan Sur, Hugo Taboada, Keita Teranishi, Rajeev Thakur, Keith Underwood, Geoffroy Vallee, Akshay Venkatesh, Jerome Vienne, Anh Vo, Justin Wozniak, Junchao Zhang, Dong Zhong, Hui Zhou

The MPI Forum also acknowledges and appreciates the valuable input from people via e-mail and in person.

The following institutions supported the MPI-4.0 effort through time and travel support for the people listed above.

ATOS
Argonne National Laboratory
Arm
Auburn University
Barcelona Supercomputing Center
CEA
Cisco Systems Inc.
Cray Inc.
EPCC, The University of Edinburgh
ETH Zürich
Fujitsu
Fulda University of Applied Sciences
German Research School for Simulation Sciences
Hewlett Packard Enterprise
International Business Machines
Institut National de Recherche en Informatique et Automatique (Inria)
Intel Corporation
Jülich Supercomputing Center, Forschungszentrum Jülich
KTH Royal Institute of Technology
Kyushu University
Lawrence Berkeley National Laboratory
Lawrence Livermore National Laboratory
Lenovo
Los Alamos National Laboratory
Mellanox Technologies, Inc.
Microsoft Corporation
NEC Corporation
NVIDIA Corporation
Oak Ridge National Laboratory
PAR-TEC
Paratools, Inc.
RIKEN AICS (R-CCS as of 2017)
RWTH Aachen University
Rutgers University
Sandia National Laboratories
Silicon Graphics, Inc.
Technical University of Munich
The HDF Group
The Ohio State University
Texas Advanced Computing Center
Tokyo Institute of Technology
University of Alabama at Birmingham
University of Basel, Switzerland
University of Houston
University of Illinois at Urbana-Champaign and the National Center for Supercomputing Applications
University of Innsbruck
University of Oregon
University of Potsdam
University of Stuttgart, High Performance Computing Center Stuttgart (HLRS)
University of Tennessee, Chattanooga
University of Tennessee, Knoxville
University of Texas at El Paso
University of Tokyo
VSC Research Center, TU Wien
Chapter 1

Introduction to MPI

1.1 Overview and Goals

MPI (Message-Passing Interface) is a message-passing library interface specification. All parts of this definition are significant. MPI addresses primarily the message-passing parallel programming model, in which data is moved from the address space of one process to that of another process through cooperative operations on each process. Extensions to the "classical" message-passing model are provided in collective operations, remote-memory access operations, dynamic process creation, and parallel I/O. MPI is a specification, not an implementation; there are multiple implementations of MPI. This specification is for a library interface; MPI is not a language, and all MPI operations are expressed as functions, subroutines, or methods, according to the appropriate language bindings which, for C and Fortran, are part of the MPI standard. The standard has been defined through an open process by a community of parallel computing vendors, computer scientists, and application developers. The next few sections provide an overview of the history of MPI's development.

The main advantages of establishing a message-passing standard are portability and ease of use. In a distributed memory communication environment in which the higher level routines and/or abstractions are built upon lower level message-passing routines, the benefits of standardization are particularly apparent. Furthermore, the definition of a message-passing standard, such as that proposed here, provides vendors with a clearly defined base set of routines that they can implement efficiently, or in some cases for which they can provide hardware support, thereby enhancing scalability.

The goal of the Message-Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs. As such the interface should establish a practical, portable, efficient, and flexible standard for message passing.
A complete list of goals follows.

• Design an application programming interface (not necessarily for compilers or a system implementation library).
• Allow efficient communication: Avoid memory-to-memory copying, allow overlap of computation and communication, and offload to communication co-processors, where available.
• Allow for implementations that can be used in a heterogeneous environment.
• Allow convenient C and Fortran bindings for the interface.