Subject: Message Passing Interface (MPI) FAQ
Supersedes: <doss-mpi-faq-3-1995@ERC.MsState.Edu>
Date: 17 May 1995 16:16:23 GMT
Organization: Center for Computational Field Simulation
Summary: This posting contains a list of common questions (and their
         answers) about the Message Passing Interface standard (also 
         known as MPI).

Archive-Name: mpi-faq
Last-Modified: Wed, May 17 1995
Posting-Frequency: monthly
Version: $Id: mpi-faq.bfnn,v 1.33 1995/05/17 15:50:23 doss Exp $

This is the list of Frequently Asked Questions about the MPI (Message
Passing Interface) standard, a set of library functions for message
passing [see Q1.2 `What is MPI?' for more details].  For a list of the
latest changes to this document, see Q1.1 `Recent changes to the FAQ.'.

MPI questions/answers and pointers to additional MPI information are
actively sought.  Contributions are welcome!

You can skip to a particular question by searching for `Question n.n'.
See Q5.2 `Formats in which this FAQ is available' for details of where to
get the PostScript, Emacs Info, and HTML versions of this document.

Note: An MPI Developers Meeting is being held June 22-23, 1995 at the
University of Notre Dame.  It is for developers of applications which use
the MPI standard.  Further information about the meeting can be found at
http://www.cse.nd.edu/mpidc95/ .

===============================================================================

Index

 Section 1.  Introduction and General Information
 Q1.1        Recent changes to the FAQ.
 Q1.2        What is MPI?
 Q1.3        What is the MPI Forum?
 Q1.4        Who was involved in creating the MPI standard?
 Q1.5        The history of MPI
 Q1.6        Are there plans for an MPI2?
 Q1.7        Are there plans for I/O extensions to MPI?
 Q1.8        How do I send comments about MPI to MPIF members?

 Section 2.  MPI Implementations
 Q2.1        What implementations are in progress?
 Q2.2        What freely available MPI implementations are currently available and where do I get them?
 Q2.3        Where can I get a version of MPI for platform X?
 Q2.4        Test Suites

 Section 3.  Additional sources of information about MPI
 Q3.1        What newsgroups and mailing lists are there for MPI?
 Q3.2        Where do I obtain a copy of the MPI document?
 Q3.3        What information about MPI is available through the WWW?
 Q3.4        MPI-related papers
 Q3.5        MPI-related presentations
 Q3.6        MPI tutorials
 Q3.7        MPI-related books
 Q3.8        Where can I find the errata for the MPI document?
 Q3.9        Are the MPI Forum mailing lists archived somewhere?
 Q3.10       Are the minutes from the MPIF meetings available?
 Q3.11       Where can I get example MPI programs?
 Q3.12       Miscellaneous MPI resources.

 Section 4.  How to get further assistance
 Q4.1        You still haven't answered my question!
 Q4.2        What to put in a posting about MPI

 Section 5.  Administrative information and acknowledgements
 Q5.1        Feedback is invited
 Q5.2        Formats in which this FAQ is available
 Q5.3        Where can I obtain a copy of this FAQ
 Q5.4        Authorship and acknowledgements
 Q5.5        Disclaimer and Copyright

===============================================================================

Section 1.  Introduction and General Information

 Q1.1        Recent changes to the FAQ.
 Q1.2        What is MPI?
 Q1.3        What is the MPI Forum?
 Q1.4        Who was involved in creating the MPI standard?
 Q1.5        The history of MPI
 Q1.6        Are there plans for an MPI2?
 Q1.7        Are there plans for I/O extensions to MPI?
 Q1.8        How do I send comments about MPI to MPIF members?

-------------------------------------------------------------------------------

Question 1.1.  Recent changes to the FAQ.

* Added pointer to WinMPI implementation in Q2.2 `What freely available
  MPI implementations are currently available and where do I get them?'.

* Added a note in the introduction about the MPI Developers meeting.

-------------------------------------------------------------------------------

Question 1.2.  What is MPI?

MPI stands for Message Passing Interface.  The goal of MPI, simply stated,
is to develop a widely used standard for writing message-passing programs.
As such the interface should establish a practical, portable, efficient,
and flexible standard for message passing.

Message passing is a paradigm used widely on certain classes of parallel
machines, especially those with distributed memory. Although there are
many variations, the basic concept of processes communicating through
messages is well understood. Over the last ten years, substantial progress
has been made in casting significant applications in this paradigm. Each
vendor has implemented its own variant. More recently, several systems
have demonstrated that a message passing system can be efficiently and
portably implemented. It is thus an appropriate time to try to define both
the syntax and semantics of a core of library routines that will be useful
to a wide range of users and efficiently implementable on a wide range of
computers.
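
As a minimal sketch of this paradigm (illustrative only, not taken from
the FAQ or the standard text; error checking omitted), here is a C
program using the MPI-1 bindings in which process 0 sends one integer to
process 1:

```c
/* Minimal MPI-1 point-to-point sketch: process 0 sends one integer
 * to process 1.  Run with (at least) two processes; error checking
 * is omitted for brevity. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);                /* enter the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?       */

    if (rank == 0) {
        value = 42;
        /* send one MPI_INT to rank 1 with message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive one MPI_INT from rank 0 with tag 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();                        /* leave the MPI environment */
    return 0;
}
```

How such a program is compiled and launched is implementation-specific;
consult the documentation of the implementation you are using.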

In designing MPI the MPI Forum sought to make use of the most attractive
features of a number of existing message passing systems, rather than
selecting one of them and adopting it as the standard. Thus, MPI has been
strongly influenced by work at the IBM T. J. Watson Research Center,
Intel's NX/2, Express, nCUBE's Vertex, p4, and PARMACS. Other important
contributions have come from Zipcode, Chimp, PVM, Chameleon, and PICL.

The main advantages of establishing a message-passing standard are
portability and ease of use. In a distributed memory communication
environment in which the higher level routines and/or abstractions are
built upon lower level message passing routines, the benefits of
standardization are particularly apparent.  Furthermore, the definition of
a message passing standard, such as that proposed here, provides vendors
with a clearly defined base set of routines that they can implement
efficiently, or in some cases provide hardware support for, thereby
enhancing scalability.

Source: MPI Document
(http://www.mcs.anl.gov/mpi/mpi-report/mpi-report.html)

-------------------------------------------------------------------------------

Question 1.3.  What is the MPI Forum?

Message Passing Interface Forum

The Message Passing Interface Forum (MPIF), with participation from over
40 organizations, has been meeting since November 1992 to discuss and
define a set of library interface standards for message passing. MPIF is
not sanctioned or supported by any official standards organization.

Source: MPI Document
(http://www.mcs.anl.gov/mpi/mpi-report/mpi-report.html)

-------------------------------------------------------------------------------

Question 1.4.  Who was involved in creating the MPI standard?

The technical development was carried out by subgroups, whose work was
reviewed by the full committee. During the period of development of the
Message Passing Interface (MPI), many people served in positions of
responsibility; they are listed below.

* Jack Dongarra, David Walker, Conveners and Meeting Chairs

* Ewing Lusk, Bob Knighten, Minutes

* Marc Snir, William Gropp, Ewing Lusk, Point-to-Point Communications

* Al Geist, Marc Snir, Steve Otto, Collective Communications

* Steve Otto, Editor

* Rolf Hempel, Process Topologies

* Ewing Lusk, Language Binding

* William Gropp, Environmental Management

* James Cownie, Profiling

* Anthony Skjellum, Lyndon Clarke, Marc Snir, Richard Littlefield, Mark
  Sears, Groups, Contexts, and Communicators

* Steven Huss-Lederman, Initial Implementation Subset

See the MPI document for a list of other active participants in the MPI
process not mentioned above.

Source:  MPI Document
(http://www.mcs.anl.gov/mpi/mpi-report/mpi-report.html)

-------------------------------------------------------------------------------

Question 1.5.  The history of MPI

The MPI standardization effort involved about 60 people from 40
organizations mainly from the United States and Europe. Most of the major
vendors of concurrent computers were involved in MPI, along with
researchers from universities, government laboratories, and industry. The
standardization process began with the Workshop on Standards for Message
Passing in a Distributed Memory Environment, sponsored by the Center for
Research on Parallel Computing, held April 29-30, 1992, in Williamsburg,
Virginia. At this workshop the basic features essential to a standard
message passing interface were discussed, and a working group was
established to continue the standardization process.

A preliminary draft proposal, known as  MPI1
(ftp://netlib2.cs.utk.edu/mpi/mpi1.ps), was put forward by Dongarra,
Hempel, Hey, and Walker in November 1992, and a revised version was
completed in February 1993. MPI1 embodied the main features that were
identified at the Williamsburg workshop as being necessary in a message
passing standard. Since MPI1 was primarily intended to promote discussion
and ``get the ball rolling,'' it focused mainly on point-to-point
communications. MPI1 brought to the forefront a number of important
standardization issues, but did not include any collective communication
routines and was not thread-safe.

In November 1992, a meeting of the MPI working group was held in
Minneapolis, at which it was decided to place the standardization process
on a more formal footing, and to generally adopt the procedures and
organization of the High Performance Fortran Forum. Subcommittees were
formed for the major component areas of the standard, and an email
discussion service established for each. In addition, the goal of
producing a draft MPI standard by the Fall of 1993 was set. To achieve
this goal the MPI working group met every 6 weeks for two days throughout
the first 9 months of 1993, and presented the draft MPI standard at the
Supercomputing 93 conference in November 1993. These meetings and the
email discussion together constituted the MPI Forum, membership of which
has been open to all members of the high performance computing community.

Source: MPI Document
(http://www.mcs.anl.gov/mpi/mpi-report/mpi-report.html)

-------------------------------------------------------------------------------

Question 1.6.  Are there plans for an MPI2?

MPI2 Meetings have begun.   Argonne National Lab maintains a web page
detailing the MPI 2 effort at http://www.mcs.anl.gov/mpi/mpi2/mpi2.html .
Oak Ridge National Lab also maintains a web page on the MPI 2 effort at
http://www.epm.ornl.gov/~walker/mpi/mpi2.html .

It was decided at the final MPI-1 meeting (Feb. 1994) that plans for
extending MPI should wait until people have had some experience with the
current version of MPI.  The MPI Forum held a BOF session at
Supercomputing '94 to discuss the possibility of an MPI2 effort.

A discussion of possible MPI2 extensions was held at the end of the
February 1994 meeting.  The following items were mentioned as possible
areas of expansion.

* I/O

* Active messages

* Process startup

* Dynamic process control

* Remote store/access

* Fortran 90 and C++ language bindings

* Graphics

* Real-time support

* Other "enhancements"

-------------------------------------------------------------------------------

Question 1.7.  Are there plans for I/O extensions to MPI?

Working together, IBM Research and NASA Ames have drafted MPI-IO, a
proposal to address the portable parallel I/O problem.  In a nutshell,
this proposal is based on the idea that I/O can be modeled as message
passing: writing to a file is like sending a message, and reading from a
file is like receiving a message.  MPI-IO intends to leverage the
relatively wide acceptance of the MPI interface in order to create a
similar I/O interface.
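
To make the analogy concrete, here is a purely illustrative pseudocode
sketch; the names below are invented for this FAQ and are NOT taken from
the MPI-IO proposal itself:

```
/* hypothetical names, for illustration only */
fh = file_open("data", mode)        /* obtain a handle, much as one
                                       obtains a communicator        */
file_write(fh, buf, count, type)    /* like a send: data flows out   */
file_read(fh, buf, count, type)     /* like a receive: data flows in */
```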

The current proposal represents the result of extensive discussions (and
arguments), but is by no means finished.  Many changes can be expected as
additional participants join the effort to define an interface for
portable I/O.

The current proposal, presented at Supercomputing '94 in mid November, is
available on the Web at  http://lovelace.nas.nasa.gov/MPI-IO/mpi-io.html .

They are soliciting greater participation from the high performance
computing community, and are particularly interested in feedback on the
proposal.  Feedback may be sent to: mpi-io@nas.nasa.gov

To participate in the MPI-IO discussion, you can join the mailing list by
sending a message to "mpi-io-request@nas.nasa.gov" with an empty Subject
and the single-line body "subscribe mpi-io YOUR-REAL-NAME".  Your email
address will be taken automatically from the message.

You may comment on the draft by sending mail to the mailing list
regardless of whether you join.  If you want to be an observer only, the
mailing list will be archived at the Web site.

Source:  Modified from the  MPI-IO Call for Participation posted to
comp.parallel and other groups in November, 1994.
(ftp://ftp.erc.msstate.edu/pub/mpi/mpi-io/cfp)

-------------------------------------------------------------------------------

Question 1.8.  How do I send comments about MPI to MPIF members?

You can send comments to mpi-comments@cs.utk.edu.  Your comments will be
forwarded to MPIF committee members who will attempt to respond.

Source: MPI Document
(http://www.mcs.anl.gov/mpi/mpi-report/mpi-report.html)

===============================================================================

Section 2.  MPI Implementations

 Q2.1        What implementations are in progress?
 Q2.2        What freely available MPI implementations are currently available and where do I get them?
 Q2.3        Where can I get a version of MPI for platform X?
 Q2.4        Test Suites

-------------------------------------------------------------------------------

Question 2.1.  What implementations are in progress?

Rusty Lusk and Bill Gropp have set up a mailing list for  MPI
implementors; the address is mpi-impl@mcs.anl.gov.  To subscribe to this
list, send mail to mpi-impl-request@mcs.anl.gov.

Rusty Lusk and Bill Gropp also hosted a workshop for MPI Implementors at
Argonne National Laboratory, September 7-9, 1994.   A paper on the
workshop is available on the WWW at
http://www.mcs.anl.gov/mpi/mpiimpl/paper/paper.html .

* Company and vendor representatives

  Convex, Cray, Hewlett-Packard, Hughes Aircraft, IBM, Intel, KSR, Meiko,
  Myricom (makers of high-performance network switches), NEC, PALLAS (a
  German software company), and Sun were represented.

* University and Lab representatives

  Argonne National Lab, U.C. Berkeley, University of Edinburgh,
  University of Illinois, Mississippi State, Ohio Supercomputing Center,
  and Sandia National Lab.

Researchers at Australian National University have implemented MPI on the
Fujitsu AP1000.  Information is available through the WWW at
file://dcssoft.anu.edu.au/pub/www/dcs/cap/mpi/mpi.html .

-------------------------------------------------------------------------------

Question 2.2.  What freely available MPI implementations are currently available and where do I get them?

* Argonne National Laboratory/Mississippi State University implementation.

  Available by anonymous ftp at ftp://info.mcs.anl.gov/pub/mpi .

* Edinburgh Parallel Computing Centre CHIMP implementation.

  Available by anonymous ftp at
  ftp://ftp.epcc.ed.ac.uk/pub/chimp/release/chimp.tar.Z .

* Mississippi State University UNIFY implementation.

  The UNIFY system provides a subset of MPI within the PVM environment,
  without sacrificing the PVM calls already available.

  Available by anonymous ftp at ftp://ftp.erc.msstate.edu/unify .

* Ohio Supercomputer Center LAM implementation.

  A full MPI standard implementation for LAM, a UNIX cluster computing
  environment.

  Available by anonymous ftp at ftp://tbag.osc.edu/pub/lam .

* University of Nebraska at Omaha WinMPI implementation.

  WinMPI is an MPI implementation for MS-Windows 3.1.

  Available by anonymous ftp at ftp://csftp.unomaha.edu/pub/rewini/WinMPI .

-------------------------------------------------------------------------------

Question 2.3.  Where can I get a version of MPI for platform X?

The freely available versions of MPI [see Q2.2 `What freely available MPI
implementations are currently available and where do I get them?']  port
to a wide variety of platforms.  The following lists of platforms may be
out of date and/or incorrect.  New ports are a common occurrence.  Please check
with the authors of these implementations for the authoritative list of
platforms supported.  This question was last updated on 4/17/95.

* MPICH is supported on a variety of parallel computers and workstation
  networks.  Parallel computers that are supported include:


IBM SP1, SP2 (using various communication options)
TMC CM-5
Intel Paragon, IPSC860, Touchstone Delta
Ncube2
Meiko CS-2
Kendall Square KSR-1 and KSR-2
SGI and Sun Multiprocessors
  Workstations supported include:


Sun4 family running SunOS or Solaris
Hewlett-Packard
DEC 3000 and Alpha
IBM RS/6000 family
SGI
Intel 386- or 486-based PC clones running the LINUX or FreeBSD OS
  A third-party preliminary Parix implementation for Parsytec-PowerPC
  based machines (Parix 1.3) is being developed by Lutz Laemmer
  (laemmer@iib.bauwesen.th-darmstadt.de).  Below is the status of this
  implementation:

According to the example ADIs in the mpich/mpid directories, we have
developed and tested a Parix ADI.  The ADI works well on top of Parix
1.3 and may work on T800 systems, too.

The work is public domain, and we are looking for interested users
willing to cooperate in further testing and tuning.
  Sources:  The Web page on MPICH at
  http://www.mcs.anl.gov/home/lusk/mpich and the "Configure MPICH" section
  of the MPICH Users guide at
  http://www.mcs.anl.gov/home/lusk/mpich/users.guide/ .  The MPICH Users
  guide is also available with the MPICH distribution.

* CHIMP supports the following platforms:

Sun workstations running SunOS 4.1.x or Solaris 2.x
SGIs with IRIX 4 or IRIX 5
IBM RS/6000 running AIX3.2
Sequent Symmetry
DEC Alpha AXP running OSF1
Meiko Computing Surface with transputer, i860 or SPARC nodes
  Source:  The Chimp installation guide available at
  ftp://ftp.epcc.ed.ac.uk/pub/chimp/release/doc/install.ps.Z .

* UNIFY runs atop PVM and therefore is portable to the same platforms as
  PVM.  For the latest list of supported architectures, see the
  "doc/arches" file in the PVM3 distribution [PVM3 is available at
  http://www.netlib.org/pvm3 or ftp://netlib2.cs.utk.edu/pvm3 ].

* LAM is portable to most unix systems and includes standard support for
  the following platforms.


Sun 4 (sparc), SunOS 4.1.3
Sun 4 (sparc), Solaris 2.3
SGI IRIX 4.0.5
IBM RS/6000, AIX v3r2
DEC AXP, OSF/1 V2.0
HP 9000/755, HP-UX 9.01
  Third party ports included in LAM.

i386 (and above), SCO 3.2 v4.2

  Source: ftp://tbag.osc.edu/pub/lam/Readme .

* WinMPI is based on MPICH and works under Windows 3.1.  It  currently
  requires Microsoft Visual C++ 1.5.

-------------------------------------------------------------------------------

Question 2.4.  Test Suites

Argonne National Lab maintains a WWW page at
http://www.mcs.anl.gov/Projects/mpi/mpi-test/tsuite.html that lists MPI
test suites.  They also maintain an ftp repository at
ftp://info.mcs.anl.gov/pub/mpi/mpi-test for these test suites.

===============================================================================

Section 3.  Additional sources of information about MPI

 Q3.1        What newsgroups and mailing lists are there for MPI?
 Q3.2        Where do I obtain a copy of the MPI document?
 Q3.3        What information about MPI is available through the WWW?
 Q3.4        MPI-related papers
 Q3.5        MPI-related presentations
 Q3.6        MPI tutorials
 Q3.7        MPI-related books
 Q3.8        Where can I find the errata for the MPI document?
 Q3.9        Are the MPI Forum mailing lists archived somewhere?
 Q3.10       Are the minutes from the MPIF meetings available?
 Q3.11       Where can I get example MPI programs?
 Q3.12       Miscellaneous MPI resources.

-------------------------------------------------------------------------------

Question 3.1.  What newsgroups and mailing lists are there for MPI?

An MPI-specific newsgroup,  comp.parallel.mpi (news:comp.parallel.mpi),
was recently created by a vote of 506 to 14
(ftp://ftp.erc.msstate.edu/pub/mpi/newsgroup/result) .  The RFD
(ftp://ftp.erc.msstate.edu/pub/mpi/newsgroup/rfd) for comp.parallel.mpi
was originally posted to  comp.parallel (news:comp.parallel),
comp.parallel.pvm (news:comp.parallel.pvm), and  news.announce.newgroups
(news:news.announce.newgroups) on April 4, 1994.  The  CFV
(ftp://ftp.erc.msstate.edu/pub/mpi/newsgroup/cfv) was issued June 15,
1994.  The voting results, RFD, and CFV can be retrieved by anonymous ftp
from ftp.erc.msstate.edu as pub/mpi/newsgroup/result,
pub/mpi/newsgroup/rfd and pub/mpi/newsgroup/cfv.

The MPI Forum ran several mailing lists which are now archived  [see Q3.9
`Are the MPI Forum mailing lists archived somewhere?'] on netlib.  These
are no longer active.

-------------------------------------------------------------------------------

Question 3.2.  Where do I obtain a copy of the MPI document?

The official PostScript version of the document can be obtained from
netlib at ORNL by sending a mail message to netlib@ornl.gov with the
message "send mpi-report.ps from mpi".

It may also be obtained by anonymous ftp from the following sites:

* ftp://netlib2.cs.utk.edu/mpi/mpi-report.ps

* ftp://ftp.erc.msstate.edu/pub/mpi/docs/mpi-report.ps.Z

* ftp://info.mcs.anl.gov/pub/mpi/mpi-report.ps.Z

* ftp://tbag.osc.edu/pub/lam/mpi-report.ps.Z

Argonne National Lab also provides a hypertext version available through
the WWW at http://www.mcs.anl.gov/mpi/mpi-report/mpi-report.html .

-------------------------------------------------------------------------------

Question 3.3.  What information about MPI is available through the WWW?

The following is a list of URLs that contain MPI-related information.

* Netlib Repository at UTK/ORNL (http://www.netlib.org/mpi/index.html)

* Argonne National Lab (http://www.mcs.anl.gov/mpi)

* Mississippi State University, Engineering Research Center
  (http://www.erc.msstate.edu/mpi)

* Ohio Supercomputer Center, LAM Project (http://www.osc.edu/lam.html)

* Australian National University
  (file://dcssoft.anu.edu.au/pub/www/dcs/cap/mpi/mpi.html)

* Oak Ridge National Laboratory (http://www.epm.ornl.gov/~walker/mpi/)

-------------------------------------------------------------------------------

Question 3.4.  MPI-related papers

David Walker maintains a list of MPI-related papers at
http://www.epm.ornl.gov/~walker/mpi/papers.html .

A bibliography (in BibTeX format) of MPI-related papers is available by
anonymous ftp at ftp://ftp.erc.msstate.edu/pub/mpi/bib/MPI.bib .  It is
also available on the WWW at http://www.erc.msstate.edu/mpi/mpi-bib.html ,
but it is no longer being actively maintained.

-------------------------------------------------------------------------------

Question 3.5.  MPI-related presentations

* Bill Saphir has made several presentations about MPI available through
  his WWW home page
  (http://lovelace.nas.nasa.gov/Parallel/People/wcs_homepage.html).

* Edinburgh Parallel Computing Centre has made a Technology Watch Report
  about MPI available at
  http://www.epcc.ed.ac.uk/epcc-tec/documents/techwatch-mpi/mpi-tw.book_1.html .
  It contains history, overview, and current status information about MPI.

-------------------------------------------------------------------------------

Question 3.6.  MPI tutorials

* The Ohio Supercomputing Center has begun a list of quick tutorials
  (http://www.osc.edu/Lam/mpi/mpi_tut.html) on MPI.

* The tutorial "The LAM companion to `Using MPI'" can be obtained from
  ftp://cisr.anu.edu.au/pub/papers/meglicki/mpi/tutorial/mpi/mpi.html
  or, just the LaTeX file:
  ftp://cisr.anu.edu.au/pub/papers/meglicki/mpi/tutorial/mpi.tex .  The
  tutorial is based on the book ``Using MPI, Portable Parallel
  Programming with the Message-Passing Interface'',  but is largely
  self-contained.  It should not be seen as a replacement for the book --
  rather, it is just what the title says: the companion to the book for
  LAM users.

  Source:  Zdzislaw Meglicki, Zdzislaw.Meglicki@cisr.anu.edu.au.

* The Albuquerque Resource Center at the University of New Mexico provides
  a short introduction to MPI at
  http://www.arc.unm.edu/workshop/mpi/mpi.html .

* Peter Pacheco from the University of San Francisco has made a draft
  version of an MPI tutorial geared for inexperienced users available from
  ftp://math.usfca.edu/pub/MPI/mpi.guide.ps .

* `MPI: From Fundamentals To Applications' is a tutorial by David Walker.
  It is available through the web at
  http://www.epm.ornl.gov/~walker/mpi/SLIDES/mpi-tutorial.html or as a
  postscript document at
  http://www.epm.ornl.gov/~walker/mpi/papers/mpi-tutorial.ps.Z .  A
  shorter postscript version is also available at
  http://www.epm.ornl.gov/~walker/mpi/papers/talk90min.ps.Z .

* `An Introduction to the MPI Standard', a paper by Jack Dongarra, Steve
  Otto, Marc Snir, and David Walker is available at
  http://www.netlib.org/utk/papers/intro-mpi/intro-mpi.html .

* Steve Otto has made slides available from a talk he gave on MPI.  The
  slides are available from http://www.cse.ogi.edu/~otto/MPI_Talk.ps .

-------------------------------------------------------------------------------

Question 3.7.  MPI-related books

* William Gropp, Ewing Lusk, and Anthony Skjellum, `Using MPI: Portable
  Parallel Programming with the Message-Passing Interface' (MIT Press,
  1994, 328 pages, paperback, $24.95).  Information on the book, including
  ordering instructions, can be found at
  http://www-mitpress.mit.edu/mitp/recent-books/comp/gropp.html .  The
  example programs from this book are available at
  ftp://info.mcs.anl.gov/pub/mpi/using .

* Ian Foster's online book entitled `Designing and Building Parallel
  Programs' (http://www.mcs.anl.gov/dbpp) (ISBN 0-201-57594-9; published
  by Addison-Wesley (http://www.aw.com)) includes a chapter on MPI.
  It provides a succinct and readable introduction to an MPI subset.

* Peter Pacheco is writing a book entitled `Programming Parallel
  Processors Using MPI', which will be available from Morgan Kaufmann in
  Fall 1995.

* The standard has been published as a journal article in the
  International Journal of Supercomputer Applications, Volume 8, Number
  3/4, 1994.

* Steve Otto (Oregon Graduate Institute of Science & Technology) and
  others are currently writing an Annotated Reference Manual for MPI.

-------------------------------------------------------------------------------

Question 3.8.  Where can I find the errata for the MPI document?

  An early version of the errata can be obtained by anonymous ftp at
  ftp://ftp.erc.msstate.edu/pub/mpi/docs/mpi-errata.ps .

-------------------------------------------------------------------------------

Question 3.9.  Are the MPI Forum mailing lists archived somewhere?

  Yes.  They are available from netlib.  Send a message to netlib@ornl.gov
  with the message "send index from mpi".  You can also ftp them from
  ftp://netlib2.cs.utk.edu/mpi .

  The following archived lists are available:

  * whole committee (ftp://netlib2.cs.utk.edu/mpi/mpi-comm 2364K)

  * core MPIF members (ftp://netlib2.cs.utk.edu/mpi/mpi-core 609K)

  * introduction subcommittee (ftp://netlib2.cs.utk.edu/mpi/mpi-intro 41K)

  * point-to-point subcommittee (ftp://netlib2.cs.utk.edu/mpi/mpi-pt2pt
    3862K)

  * collective communication subcommittee
    (ftp://netlib2.cs.utk.edu/mpi/mpi-collcomm 1539K)

  * process topology subcommittee (ftp://netlib2.cs.utk.edu/mpi/mpi-ptop
    1193K)

  * language binding subcommittee (ftp://netlib2.cs.utk.edu/mpi/mpi-lang
    211K)

  * formal language description subcommittee
    (ftp://netlib2.cs.utk.edu/mpi/mpi-formal 72K)

  * environment inquiry subcommittee
    (ftp://netlib2.cs.utk.edu/mpi/mpi-envir 140K)

  * profiling subcommittee (ftp://netlib2.cs.utk.edu/mpi/mpi-profile 112K)

  * context subcommittee (ftp://netlib2.cs.utk.edu/mpi/mpi-context 4618K)

  * subset subcommittee (ftp://netlib2.cs.utk.edu/mpi/mpi-iac 433K)

-------------------------------------------------------------------------------

Question 3.10.  Are the minutes from the MPIF meetings available?

  The minutes from some of the MPIF meetings are available from netlib.
  Send a message to netlib@ornl.gov with the message "send index from
  mpi".  You can also ftp them from ftp://netlib2.cs.utk.edu/mpi .

  There are minutes from the following meetings:

  * January, 1993 (ftp://netlib2.cs.utk.edu/mpi/minutes-jan)

  * February, 1993 (ftp://netlib2.cs.utk.edu/mpi/minutes-feb)

  * April, 1993 (ftp://netlib2.cs.utk.edu/mpi/minutes-apr)

  * August, 1993 (ftp://netlib2.cs.utk.edu/mpi/minutes-aug)

-------------------------------------------------------------------------------

Question 3.11.  Where can I get example MPI programs?

  Most implementations mentioned in Q2.2 `What freely available MPI
  implementations are currently available and where do I get them?' are
  distributed with some example programs.

  The examples from "Using MPI" are located at
  ftp://info.mcs.anl.gov/pub/mpi/using .

-------------------------------------------------------------------------------

Question 3.12.  Miscellaneous MPI resources.

  The LAM developers, in the course of teaching parallel workshops,
  produced a quick reference card for MPI.  It lists every function, but
  only provides syntax for a subset.  It is available from their anonymous
  ftp server at  ftp://tbag.osc.edu/pub/lam/mpi-quick-ref.ps.Z .

===============================================================================

Section 4.  How to get further assistance

 Q4.1        You still haven't answered my question!
 Q4.2        What to put in a posting about MPI

-------------------------------------------------------------------------------

Question 4.1.  You still haven't answered my question!

  Try posting your MPI related questions to the  comp.parallel.mpi
  newsgroup.

-------------------------------------------------------------------------------

Question 4.2.  What to put in a posting about MPI

  Questions will probably deal with a certain MPI implementation, MPI
  document clarifications, `how-to' type questions, etc.  Use a clear,
  detailed Subject line.  Don't put things like `MPI', `doesn't work',
  `help' or `question' in it --- we already know that!  Save the space
  for the subject the question relates to, a fragment of the error
  message, a summary of the unusual program behaviour, etc.

  Put a summary paragraph at the top of your posting.

  Remember that you should not post email sent to you personally without
  the sender's permission.

  For problems with a specific implementation, give full details of the
  problem, including

  * Enough information about the implementation you are using, including
    the version number (if there is one) and where you got it.

  * The exact and complete text of any error messages printed.

  * Exactly what behaviour you were expecting, and exactly what behaviour
    you observed.  A transcript of an example session is a good way of
    showing this.

  * Details of what hardware you're running on, if it seems appropriate.

  You are in little danger of making your posting too long unless you
  include large chunks of source code or uuencoded files, so err on the
  side of giving too much information.

  Source:  Modified from the Linux FAQ

===============================================================================

Section 5.  Administrative information and acknowledgements

 Q5.1        Feedback is invited
 Q5.2        Formats in which this FAQ is available
 Q5.3        Where can I obtain a copy of this FAQ
 Q5.4        Authorship and acknowledgements
 Q5.5        Disclaimer and Copyright

-------------------------------------------------------------------------------

Question 5.1.  Feedback is invited

  Please send me your comments on this FAQ.

  I accept submissions for the FAQ in any format; all contributions,
  comments, and corrections are gratefully received.

  Please send them to doss@ERC.MsState.Edu (Nathan Doss).

-------------------------------------------------------------------------------

Question 5.2.  Formats in which this FAQ is available

  This document is available as ASCII text, as an Emacs Info document, and
  in PostScript.  It is also available on the World Wide Web (WWW) at
  http://www.erc.msstate.edu/mpi/mpi-faq.html or at
  http://www.cis.ohio-state.edu/hypertext/faq/usenet/mpi-faq/faq.html .

  The ASCII, Emacs Info, and HTML versions are generated automatically by
  a Perl script which takes as input a file in the Bizarre Format with No
  Name.  Mosaic is used to create the PostScript version from the HTML
  version.

  The output files mpi-faq.ascii, .info, .html, and .ps and a tarfile
  mpi-faq.source.tar.gz, containing the BFNN source and Perl script
  converter, are available in  ftp://ftp.erc.msstate.edu/pub/mpi/faq .

-------------------------------------------------------------------------------

Question 5.3.  Where can I obtain a copy of this FAQ

  In addition to finding it in those places listed in Q5.2 `Formats in
  which this FAQ is available', the ASCII version is posted monthly to
  comp.parallel.mpi, news.answers, and comp.answers.

  The ASCII version can also be obtained through anonymous ftp from
  ftp://rtfm.mit.edu/pub/usenet/news.answers/mpi-faq ; those without FTP
  access can send e-mail to mail-server@rtfm.mit.edu with "send
  usenet/news.answers/mpi-faq" in the message body.

-------------------------------------------------------------------------------

Question 5.4.  Authorship and acknowledgements

  This FAQ was compiled by Nathan Doss (doss@ERC.MsState.Edu), with
  assistance and comments from others.

  Thanks to the MPI Forum and those who gave feedback about the MPI
  document for giving us something to write about!

  The format of this FAQ, the wording of the Disclaimer and Copyright, and
  the original Perl conversion scripts were borrowed (with permission)
  from Ian Jackson (ijackson@nyx.cs.du.edu), who maintains the "Linux
  Frequently Asked Questions with Answers" document.

-------------------------------------------------------------------------------

Question 5.5.  Disclaimer and Copyright

  Note that this document is provided as is.  The information in it is
  *not* warranted to be correct; you use it at your own risk.

  MPI Frequently Asked Questions is Copyright 1994 by Mississippi State
  University.  It may be reproduced and distributed in whole or in part,
  subject to the following conditions:

  * This copyright and permission notice must be retained on all complete
    or partial copies.

  * Any translation or derivative work must be approved by me before
    distribution.

  * If you distribute MPI Frequently Asked Questions in part, instructions
    for obtaining the complete version of this manual must be included,
    and a means for obtaining a complete version free or at cost price
    provided.

  Exceptions to these rules may be granted, and I shall be happy to answer
  any questions about this copyright --- write to Nathan Doss, P.O. Box
  6176, Engineering Research Center, Mississippi State, MS 39762, or
  email doss@ERC.MsState.Edu.  These restrictions are here to protect the
  contributors, not to restrict you as educators and learners.

===============================================================================
--
Nathan Doss                  doss@ERC.MsState.Edu
                     PGP Key Available on Request         
