Global Address Space Programming Interface

Global Address Space Programming Interface (GPI) is an application programming interface (API) for developing scalable, asynchronous and fault-tolerant parallel applications.[2] It is an implementation of the partitioned global address space (PGAS) programming model.[3]

GPI
Developer(s): Fraunhofer ITWM
Stable release: GPI-2 1.3.0 / May 3, 2016[1]
Operating system: Linux
Type: Application programming interface
Website: www.itwm.fraunhofer.de/en.html

History

GPI has been developed by the Fraunhofer Institute for Industrial Mathematics (ITWM) since 2005 and was initially known as FVM (Fraunhofer Virtual Machine).

In 2009, the name was changed to Global Address Space Programming Interface, or GPI.

In 2011, Fraunhofer ITWM and partners including Fraunhofer SCAI, TUD, T-Systems SfR, DLR, KIT, FZJ, DWD and Scapos launched the GASPI[4] project to define a specification for an API (GASPI, based on GPI) and to establish it as a reliable, scalable and universal tool for the HPC community. GPI-2 is the first open-source implementation of this standard.

The software is freely available to application developers and researchers; licenses for commercial use are available through Scapos AG.[3]

GPI has completely replaced MPI at Fraunhofer ITWM, where all products and research are now based on GPI-2.

Concepts

Segments

[Figure: GPI Architecture]

Modern hardware typically involves a hierarchy of memories with respect to the bandwidth and latency of read and write accesses. Within that hierarchy are non-uniform memory access (NUMA) partitions, solid-state devices (SSDs), graphics processing unit (GPU) memory and many integrated core (MIC) memory. GPI memory segments map this variety of hardware layers onto the software layer. In the spirit of the PGAS approach, these segments may be globally accessible from every thread of every GPI process. Segments can also be used to leverage different memory models within a single application, or even to run different applications.
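Segment creation can be sketched as follows. This fragment follows the GASPI specification; error handling is elided and exact signatures or constants (e.g. the allocation policy) may differ between GPI-2 versions:

```c
#include <GASPI.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
  /* Initialise the GPI-2 process (blocks until all ranks are up). */
  if (gaspi_proc_init(GASPI_BLOCK) != GASPI_SUCCESS)
    exit(EXIT_FAILURE);

  /* Create a globally accessible segment of 1 MiB on every rank of
     GASPI_GROUP_ALL; other ranks can then read/write it one-sidedly. */
  const gaspi_segment_id_t seg_id = 0;
  gaspi_segment_create(seg_id, 1 << 20, GASPI_GROUP_ALL,
                       GASPI_BLOCK, GASPI_MEM_INITIALIZED);

  /* Obtain a local pointer into the segment for direct access. */
  gaspi_pointer_t ptr;
  gaspi_segment_ptr(seg_id, &ptr);

  gaspi_proc_term(GASPI_BLOCK);
  return EXIT_SUCCESS;
}
```

A program like this is typically started on several nodes with the GPI-2 launcher, with each process becoming one rank.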

Groups

A group is a subset of all ranks. The members of a group share common collective operations; a collective operation on a group is restricted to the ranks forming that group. There is an initial group (GASPI_GROUP_ALL) of which all ranks are members. Forming a new group involves three steps: creation, addition and commit. These operations must be performed by all ranks forming the group. Creation is performed using gaspi_group_create. If this operation is successful, ranks can be added to the created group using gaspi_group_add. Before the group can be used, all ranks added to it must commit to it using gaspi_group_commit, a collective operation between the ranks in the group.
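The three steps can be sketched as follows; this is an illustrative fragment based on the GASPI specification (the function name build_even_group is hypothetical, and error handling is elided):

```c
#include <GASPI.h>

/* Sketch: build a group containing the even ranks. Every rank that
   will be a member must execute these same three steps. */
void build_even_group(gaspi_group_t *g)
{
  gaspi_rank_t nprocs;
  gaspi_proc_num(&nprocs);

  gaspi_group_create(g);                 /* step 1: creation */
  for (gaspi_rank_t r = 0; r < nprocs; r += 2)
    gaspi_group_add(*g, r);              /* step 2: addition */
  gaspi_group_commit(*g, GASPI_BLOCK);   /* step 3: commit (collective) */
}
```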

One-sided communication

One-sided asynchronous communication is the basic communication mechanism provided by GPI-2. It comes in two flavors: read and write operations (single or in a list) from and into allocated segments. Write operations can additionally carry notifications, which produce remote completion events to which a remote rank can react. One-sided operations are non-blocking and asynchronous, allowing the program to continue its execution alongside the data transfer.

The mechanisms for communication in GPI-2 are the following:

gaspi_write
gaspi_write_list
gaspi_read
gaspi_read_list
gaspi_wait
gaspi_notify
gaspi_write_notify
gaspi_write_list_notify
gaspi_notify_waitsome
gaspi_notify_reset
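A typical write-with-notification exchange can be sketched as follows, based on the GASPI specification (the function name send_and_notify, the segment/queue/notification numbers and the message size are illustrative; error handling is elided):

```c
#include <GASPI.h>

/* Sketch: rank 0 writes 256 bytes from its segment 0 into rank 1's
   segment 0 and attaches notification 7, on which rank 1 waits. */
void send_and_notify(void)
{
  gaspi_rank_t rank;
  gaspi_proc_rank(&rank);

  if (rank == 0) {
    gaspi_write_notify(0   /* local segment  */, 0 /* local offset  */,
                       1   /* target rank    */,
                       0   /* remote segment */, 0 /* remote offset */,
                       256 /* bytes          */,
                       7   /* notification id */, 1 /* value */,
                       0   /* queue */, GASPI_BLOCK);
    gaspi_wait(0, GASPI_BLOCK);            /* local completion  */
  } else if (rank == 1) {
    gaspi_notification_id_t first;
    gaspi_notify_waitsome(0, 7, 1, &first, GASPI_BLOCK);
    gaspi_notification_t val;
    gaspi_notify_reset(0, first, &val);    /* remote completion */
  }
}
```

Note that gaspi_wait only guarantees local completion on the sender; the notification is what tells the receiver that the data has arrived.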

Queues

Communication requests can be submitted to one of several queues. Queues improve scalability and can be used as channels for different types of requests: similar requests are queued together and synchronised together, but independently of the other queues (separation of concerns).
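As a sketch of this separation of concerns, assuming an application that submits bulk-data transfers to queue 0 and small control messages to queue 1 (an illustrative convention, not mandated by the API):

```c
#include <GASPI.h>

/* Sketch: wait on each channel independently. Completing queue 0
   says nothing about the requests pending on queue 1, and vice versa. */
void flush_channels(void)
{
  gaspi_wait(0, GASPI_BLOCK);  /* completes only the bulk-data requests */
  gaspi_wait(1, GASPI_BLOCK);  /* completes only the control requests   */
}
```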

Global atomics

GPI-2 provides atomic operations with which variables in segment memory can be manipulated atomically. There are two basic atomic operations: fetch_and_add and compare_and_swap. Atomically manipulated values can be used as globally shared variables, for example to synchronise processes or events.
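A classic use is a global ticket counter. The following sketch, based on the GASPI specification's fetch_and_add operation, assumes a counter placed at offset 0 of segment 0 on rank 0 (the placement and the function name next_ticket are illustrative):

```c
#include <GASPI.h>

/* Sketch: every rank atomically increments the global counter held
   on rank 0 and receives the previous value as its unique ticket. */
gaspi_atomic_value_t next_ticket(void)
{
  gaspi_atomic_value_t old;
  gaspi_atomic_fetch_add(0 /* segment */, 0 /* offset */,
                         0 /* rank holding the variable */,
                         1 /* increment */, &old, GASPI_BLOCK);
  return old;
}
```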

Timeouts

Fault-tolerant parallel programs require non-blocking communication calls. GPI-2 therefore provides a timeout mechanism for all potentially blocking procedures; timeouts are specified in milliseconds. GASPI_BLOCK is a predefined timeout value that blocks the procedure call until completion. GASPI_TEST is another predefined timeout value that blocks the procedure for the shortest time possible, i.e. the time in which the procedure call processes an atomic portion of its work.
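The timeout mechanism allows communication to be overlapped with computation, as in this sketch based on the GASPI specification (which defines GASPI_TIMEOUT as the return code of a call that did not complete in time; the function name wait_with_progress is illustrative):

```c
#include <GASPI.h>

/* Sketch: poll a queue with GASPI_TEST and keep doing useful work
   until all outstanding requests on it have completed, rather than
   blocking inside the library. */
void wait_with_progress(gaspi_queue_id_t q)
{
  while (gaspi_wait(q, GASPI_TEST) == GASPI_TIMEOUT) {
    /* overlap: perform local computation here */
  }
}
```

A finite timeout (e.g. gaspi_wait(q, 1000)) would instead bound the call to roughly one second, letting a fault-tolerant program detect and react to a failed communication partner.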

Products using GPI

  • Based on GPI, Fraunhofer has also developed GPI-Space, a distributed run-time system for parallel programming.
  • Pre-Stack PRO[5] is a commercially available software toolkit for pre-stack data analysis that was initially developed by Fraunhofer and is now further developed and sold by Sharp Reflections, a joint spin-off of Fraunhofer and some of its industry partners.

References

  1. ^ "New GPI-2 release (V1.3.0) | GPI-2". Archived from the original on 2017-09-26. Retrieved 2017-09-26.
  2. ^ "GPI-2 project". Archived from the original on 2014-04-26. Retrieved 2014-04-25.
  3. ^ a b "Scapos Parallel Software products". Archived from the original on 2014-04-26. Retrieved 2014-04-25.
  4. ^ "GASPI Project". Archived from the original on 2014-07-14. Retrieved 2014-07-08.
  5. ^ "Sharp Reflections". Archived from the original on 2014-07-15. Retrieved 2014-07-08.