Sunday, February 10, 2013

Some Terms in Parallel Computing

SIMD (Single Instruction, Multiple Data) - A computer with multiple processors, each of which performs the same operation on a different data stream simultaneously.
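
As a rough illustration, here is a minimal sketch of the SIMD idea using x86 SSE intrinsics (this assumes an SSE-capable CPU and a compiler that provides immintrin.h): a single add instruction operates on four floats at once.

    #include <immintrin.h>
    #include <cstdio>

    int main() {
        alignas(16) float a[4] = {1, 2, 3, 4};
        alignas(16) float b[4] = {10, 20, 30, 40};
        alignas(16) float c[4];

        __m128 va = _mm_load_ps(a);     // load four floats into one 128-bit register
        __m128 vb = _mm_load_ps(b);
        __m128 vc = _mm_add_ps(va, vb); // one instruction, four additions at once
        _mm_store_ps(c, vc);

        for (float x : c) std::printf("%g ", x);  // 11 22 33 44
        std::printf("\n");
    }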

MIMD (Multiple Instruction, Multiple Data) - Each processor in a multiprocessor system performs a different operation on a separate data stream simultaneously.

SPMD (Single Program, Multiple Data) - A more restrictive form of MIMD where each processor executes the same program, though not necessarily the same instruction at the same time, each on its own data.
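
A minimal SPMD sketch with C++ std::thread (the thread count and data slicing below are illustrative assumptions, not part of any standard): every thread runs the same function, i.e. the same program, but each works on its own slice of the data.

    #include <functional>
    #include <thread>
    #include <vector>
    #include <cstdio>

    // Same code for every thread; only the data range differs.
    void worker(std::vector<int>& data, int begin, int end) {
        for (int i = begin; i < end; ++i) data[i] *= 2;
    }

    int main() {
        std::vector<int> data(8, 1);
        const int nthreads = 4;
        const int chunk = static_cast<int>(data.size()) / nthreads;

        std::vector<std::thread> threads;
        for (int t = 0; t < nthreads; ++t)
            threads.emplace_back(worker, std::ref(data), t * chunk, (t + 1) * chunk);
        for (auto& th : threads) th.join();

        for (int x : data) std::printf("%d ", x);  // every element doubled
        std::printf("\n");
    }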

See Flynn's taxonomy for more information on the above.

Communication Bandwidth - The maximum amount of data that can be transmitted per unit of time.

Communication Latency - The amount of time from when a piece of data is sent, to when it is received by the target.
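
A common back-of-the-envelope model (a simplification I am adding here, with made-up numbers) combines the two: the time to deliver a message is roughly the latency plus the message size divided by the bandwidth.

    #include <cstdio>

    int main() {
        const double latency_s = 1e-6;       // assumed: 1 microsecond latency
        const double bandwidth = 10e9 / 8;   // assumed: 10 Gbit/s link, in bytes/s
        const double message   = 1e6;        // a 1 MB message

        const double time_s = latency_s + message / bandwidth;
        std::printf("estimated transfer time: %.3f ms\n", time_s * 1e3);  // ~0.801 ms
    }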

Message Passing - A model of interaction among processors in a multiprocessor system. A message is composed by instructions running on one processor and sent to another processor through the interconnecting bus(es).
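
A minimal message-passing sketch between two threads (a stand-in for real processors; the mailbox queue below is my own illustrative construction, not a real bus or MPI): the sender composes a message and the receiver blocks until it arrives.

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <cstdio>

    std::queue<std::string> mailbox;   // stands in for the interconnect
    std::mutex m;
    std::condition_variable cv;

    void sender() {
        std::lock_guard<std::mutex> lock(m);
        mailbox.push("hello from processor 0");   // compose and send
        cv.notify_one();
    }

    void receiver() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !mailbox.empty(); });  // wait for delivery
        std::printf("received: %s\n", mailbox.front().c_str());
        mailbox.pop();
    }

    int main() {
        std::thread r(receiver), s(sender);
        s.join();
        r.join();
    }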

Shared Memory - A model of interaction where the separate processors can read and write the same memory space, and therefore access each other's data values. It can be physical, where a single memory is available to all the processors, or logical, where each processor has its own memory and a request for a non-local memory address is converted into some form of inter-processor communication.
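
A minimal shared-memory sketch (logical sharing between threads of one process, which is the simplest case): both threads write into the same address space, so each value written by one is visible to the other after the join.

    #include <thread>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<int> shared(4, 0);   // memory visible to both threads

        std::thread t0([&] { shared[0] = 10; shared[1] = 11; });
        std::thread t1([&] { shared[2] = 20; shared[3] = 21; });
        t0.join();
        t1.join();

        for (int x : shared) std::printf("%d ", x);  // 10 11 20 21
        std::printf("\n");
    }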

Aggregate Function - A model of interaction where a group of processors act together. An example is barrier synchronization, where each processor outputs a data value on reaching a barrier (a particular point in the computation process) and the communication hardware returns a value to each processor that is a function of all the values received from the processors.
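
A minimal sketch of barrier synchronization using C++20 std::barrier (this assumes a C++20 standard library; real aggregate-function hardware would do the combining in the interconnect): each thread contributes a value before waiting, and the completion function combines all contributions once everyone has arrived.

    #include <atomic>
    #include <barrier>
    #include <thread>
    #include <vector>
    #include <cstdio>

    std::atomic<int> sum{0};

    int main() {
        const int n = 4;
        std::barrier barrier(n, []() noexcept {
            // Runs once, after all n threads have reached the barrier.
            std::printf("aggregate across all threads: %d\n", sum.load());  // 6
        });

        std::vector<std::thread> threads;
        for (int id = 0; id < n; ++id)
            threads.emplace_back([&, id] {
                sum += id;                   // each thread contributes a value
                barrier.arrive_and_wait();   // no one proceeds until all arrive
            });
        for (auto& t : threads) t.join();
    }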

SMP (Symmetric Multiprocessors) - A multiprocessor system with two or more identical processors and a single shared memory, under the control of a single OS. It can be thought of as MIMD with shared memory.

Processor Affinity - The OS scheduler keeps a process on the same processor in a multiprocessor system to take advantage of locally cached data.
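
A minimal, Linux-specific sketch of setting affinity with sched_setaffinity (a nonportable call; other operating systems expose affinity through different APIs): the calling thread is pinned to CPU 0 so the scheduler keeps it near its cached data.

    #include <sched.h>    // sched_setaffinity, CPU_ZERO, CPU_SET (glibc, _GNU_SOURCE)
    #include <cstdio>

    int main() {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                                    // allow only CPU 0

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {  // pid 0 = calling thread
            std::perror("sched_setaffinity");
            return 1;
        }
        std::printf("pinned to CPU 0\n");
    }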

Shared Everything - All data structures are in shared memory.

Shared Something - Only a subset of the data structures (the ones that need to be shared) are in shared memory.

Atomicity - The concept of an uninterruptible and indivisible operation (sequence of instructions) on a data object.
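
A minimal sketch of atomicity with std::atomic: fetch_add is an indivisible read-modify-write, so concurrent increments are never lost (with a plain int, some would be).

    #include <atomic>
    #include <thread>
    #include <cstdio>

    std::atomic<long> counter{0};

    int main() {
        auto bump = [] { for (int i = 0; i < 100000; ++i) counter.fetch_add(1); };
        std::thread t1(bump), t2(bump);
        t1.join();
        t2.join();
        std::printf("counter = %ld\n", counter.load());  // always 200000
    }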

Cache Coherence - Maintaining consistent copies of shared memory across the processors' caches. A change made in one cache must be propagated to (or invalidated in) the other caches.

Mutual Exclusion - Ensuring that at most one processor or process is updating a given shared object at any given time.
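
A minimal sketch of mutual exclusion with std::mutex: the lock guarantees that at most one thread updates the shared container at a time.

    #include <mutex>
    #include <thread>
    #include <vector>
    #include <cstdio>

    std::vector<int> shared_data;
    std::mutex shared_mutex;

    void append(int value) {
        std::lock_guard<std::mutex> lock(shared_mutex);  // exclusive access
        shared_data.push_back(value);                    // safe concurrent update
    }

    int main() {
        std::thread t1([] { for (int i = 0; i < 1000; ++i) append(i); });
        std::thread t2([] { for (int i = 0; i < 1000; ++i) append(i); });
        t1.join();
        t2.join();
        std::printf("size = %zu\n", shared_data.size());  // always 2000
    }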

Gang Scheduling - Running only related processes or threads simultaneously on a multiprocessor system at a given instant. These could be the processes of one program, or a set of processes where the input of one depends on the output of another running at the same time.


