Parallel programming language

In computer software, a parallel programming model is a model for writing parallel programs which can be compiled and executed. The value of a programming model can be judged on its generality (how well a range of different problems can be expressed for a variety of different architectures) and its performance (how efficiently the compiled programs execute). Implementations of a programming model can take several forms, such as libraries invoked from traditional sequential languages, language extensions, or complete new execution models.

Consensus around a programming model is important because it enables software expressed within the model to be ported between different architectures. For sequential programming, the von Neumann model has facilitated this, as it provides an efficient bridge between hardware and software: high-level languages can be efficiently compiled to it, and it can be efficiently implemented in hardware.[1]

Main classifications and paradigms

Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition.

Process interaction

Process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. The most common forms of interaction are shared memory and message passing, but interaction can also be implicit.

Shared memory

Shared memory is an efficient means of passing data between programs. Depending on context, the programs may run on a single processor or on multiple separate processors. In this model, parallel tasks share a global address space which they read from and write to asynchronously. This requires protection mechanisms such as locks, semaphores and monitors to control concurrent access. Shared memory can be emulated on distributed-memory systems, but non-uniform memory access (NUMA) times can come into play. Memory may also be shared between different sections of code within the same program; for example, a for loop can create a thread per iteration, with all the threads updating a shared variable in parallel.
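
To make this concrete, here is a minimal sketch using POSIX threads, one common shared-memory mechanism (the choice of pthreads, and the counter variable it protects, are illustrative assumptions rather than part of the model itself): several threads update a shared variable, with a mutex serialising the concurrent accesses.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static long counter = 0;                /* variable in the shared address space */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);      /* protect the concurrent update */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);
        printf("counter = %ld\n", counter); /* 400000 with the lock in place */
        return 0;
    }

Built with a pthreads-aware toolchain (e.g. cc -pthread), removing the lock turns the increment into a data race, which is exactly the hazard the protection mechanisms described above exist to prevent.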

Message passing

Message passing is a concept from computer science that is used extensively in the design and implementation of modern software applications; it is key to some models of concurrency and to object-oriented programming. In a message-passing model, parallel tasks exchange data by passing messages to one another. These communications can be asynchronous or synchronous. The Communicating Sequential Processes (CSP) formalisation of message passing employed communication channels to 'connect' processes, and led to a number of important languages such as Joyce, occam and Erlang.
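
As an illustrative sketch, MPI is one widely used message-passing interface (its use here is an assumption for the example; the model itself does not mandate a particular library). Two processes exchange a single integer through blocking send and receive calls:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */

        if (rank == 0) {
            int value = 42;
            /* Blocking send to rank 1, message tag 0. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Run under an MPI launcher with at least two processes (e.g. mpirun -np 2), the tasks cooperate purely by exchanging messages; no address space is shared. MPI also provides non-blocking variants (MPI_Isend/MPI_Irecv) for the asynchronous case.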

Implicit

In an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it. This is most common with domain-specific languages, where the concurrency within a problem can be more prescribed.

Problem decomposition

Flynn's taxonomy (multiprogramming context)

                   Single instruction   Multiple instruction   Single program   Multiple program
    Single data    SISD                 MISD
    Multiple data  SIMD                 MIMD                   SPMD             MPMD

A parallel program is composed of simultaneously executing processes. Problem decomposition relates to the way in which these processes are formulated. This classification may also be referred to as algorithmic skeletons or parallel programming paradigms.

Task parallelism

A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. It is usually classified as MIMD/MPMD or MISD.
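
As a hedged sketch of task parallelism (again using POSIX threads as an assumed mechanism), two behaviourally distinct tasks run concurrently over the same input, one summing it and one finding its maximum:

    #include <pthread.h>
    #include <stdio.h>

    static const int data[8] = {3, 1, 4, 1, 5, 9, 2, 6};

    /* Task 1: sum the data. */
    static void *sum_task(void *arg)
    {
        (void)arg;
        long sum = 0;
        for (int i = 0; i < 8; i++) sum += data[i];
        printf("sum = %ld\n", sum);
        return NULL;
    }

    /* Task 2: find the maximum, a different computation, hence "task" parallel. */
    static void *max_task(void *arg)
    {
        (void)arg;
        int max = data[0];
        for (int i = 1; i < 8; i++) if (data[i] > max) max = data[i];
        printf("max = %d\n", max);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_task, NULL);
        pthread_create(&t2, NULL, max_task, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

Because the two threads execute different code on the same data, this sits closer to the MPMD/MISD corner of the taxonomy than to the data-parallel SPMD style shown in the next section.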

Data parallelism

A data-parallel model focuses on performing operations on a data set, which is usually regularly structured in an array. A set of tasks operates on this data, but independently, each on a separate partition. In a shared-memory system the data is accessible to all tasks; in a distributed-memory system it will be divided between memories and worked on locally. Data parallelism is usually classified as SIMD/SPMD.
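
A minimal data-parallel sketch, assuming OpenMP as the implementation mechanism (a directive-based approach is one of several possibilities): every thread applies the same operation to its own partition of an array.

    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N];

    int main(void)
    {
        for (int i = 0; i < N; i++)
            a[i] = (double)i;

        /* Same operation on every element; OpenMP splits the
           iteration space into partitions, one per thread. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            b[i] = 2.0 * a[i];

        printf("b[N-1] = %f\n", b[N - 1]);
        return 0;
    }

Compiled with OpenMP enabled (e.g. cc -fopenmp), each thread works on a separate slice of a and b; with OpenMP disabled the pragma is ignored and the loop runs sequentially, which illustrates why SPMD code is often only a small delta from its sequential form.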

Idealised parallel systems

Idealised parallel systems fall into two categories. In the first, the abstract design space seen by the programmer is isolated from the parallel, distributed implementation: every process is presented with equal access to some kind of shared memory space, and in the loosest form any process may attempt to access any item at any time. The second category covers machines in which the two levels are closer together, in particular those in which the programmer's world includes explicit parallelism; it discards shared-memory cooperation in favour of some form of explicit message passing.

References

  1. Leslie G. Valiant, "A bridging model for parallel computation", Communications of the ACM, volume 33, issue 8, August 1990, pages 103–111

Further reading

  • H. Shan and J. Pal Singh. A comparison of MPI, SHMEM, and Cache-Coherent Shared Address Space Programming Models on a Tightly-Coupled Multiprocessor. International Journal of Parallel Programming, 29(3), 2001.
  • H. Shan and J. Pal Singh. Comparison of Three Programming Models for Adaptive Applications on the Origin 2000. Journal of Parallel and Distributed Computing, 62:241–266, 2002.
  • About structured parallel programming: Davide Pasetto and Marco Vanneschi. Machine-independent analytical models for cost evaluation of template-based programs, University of Pisa, 1996.
  • J. Darlinton, M. Ghanem, H. W. To. "Structured Parallel Programming". In Programming Models for Massively Parallel Computers, IEEE Computer Society Press, 1993.
  • Murray I. Cole. Algorithmic Skeletons: Structured Management of Parallel Computation

External links

  • Developing Parallel Programs — A Discussion of Popular Models (Oracle White Paper September 2010)
  • Designing and Building Parallel Programs (Section 1.3, 'A Parallel Programming Model')
  • Introduction to Parallel Computing (Section 'Parallel Programming Models')