
Parallel Computing

Kaiti Norton
Last Updated May 24, 2021 8:04 am

Parallel computing is a type of computing architecture in which multiple processes are executed at the same time. It is the opposite of serial computing, in which a task is broken down into a set of instructions that are processed individually in sequential order. Parallel computing is closely related to concurrent computing, but the two are distinct concepts: in parallel computing, all of the computational tasks are interrelated parts of a single problem, while concurrent computing deals with processes that are unrelated or significantly varied in nature.
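
As a rough illustration, the sketch below (ours, not from any particular source; it uses Python’s standard concurrent.futures module, and the function and variable names are illustrative) computes the same sum first serially, then in parallel by splitting it into four interrelated subtasks:

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(bounds):
        """Sum the integers in [start, stop) -- one independent subtask."""
        start, stop = bounds
        return sum(range(start, stop))

    if __name__ == "__main__":
        N = 10_000_000

        # Serial computing: one task, one instruction stream, in order.
        serial_total = partial_sum((0, N))

        # Parallel computing: split the same problem into four subtasks
        # and execute them at the same time on separate processes.
        chunks = [(i * N // 4, (i + 1) * N // 4) for i in range(4)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            parallel_total = sum(pool.map(partial_sum, chunks))

        assert serial_total == parallel_total

Processes are used here rather than threads because CPython’s global interpreter lock prevents threads from executing Python bytecode truly in parallel.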

Parallel computing was originally reserved for high-performance computing environments, but it has become the primary framework for modern computer architectures. In fact, it’s rare to find a server, laptop computer, smartphone, or other modern device that does not contain a multi-core processor. This is because parallel computing makes many computations more efficient, saving time and money. It also enables computers to solve exceptionally large and complex problems that would otherwise be impractical to process.

Types of parallel computing approaches

There are four main types of parallel computing:

  • Bit-level parallelism, which increases the processor’s word size so it can handle more information per instruction, effectively decreasing the number of instructions required to complete a single task
  • Instruction-level parallelism, which executes multiple instructions from a single instruction stream at the same time, either through hardware (dynamic parallelism) or software (static parallelism)
  • Task parallelism, which performs multiple different tasks simultaneously on the same set of data (see the sketch after this list)
  • Superword-level parallelism, which exploits the parallelism of inline code through vectorization techniques
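
To make the task parallelism entry concrete, here is a minimal sketch (again our own, using Python’s standard library; the two statistics computed are arbitrary examples) that runs two different tasks at the same time over the same data set:

    from concurrent.futures import ProcessPoolExecutor

    def compute_mean(data):
        # Task 1: average of the data set.
        return sum(data) / len(data)

    def compute_extremes(data):
        # Task 2: smallest and largest values in the same data set.
        return min(data), max(data)

    if __name__ == "__main__":
        data = list(range(1, 1_000_001))

        # Each distinct task runs in its own worker process, so the two
        # computations proceed in parallel over the same input.
        with ProcessPoolExecutor(max_workers=2) as pool:
            mean_future = pool.submit(compute_mean, data)
            extremes_future = pool.submit(compute_extremes, data)
            print(mean_future.result(), extremes_future.result())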

Types of parallel computing architecture

Parallel computers can be classified based on four types of architecture:

  • Multicore computing, in which multiple processing units (called cores) are housed on the same chip (see the sketch after this list)
  • Symmetric multiprocessing, in which a bus connects multiple separate but identical processors that share memory and a single operating system
  • Distributed computing, in which processing units reside on separate networked computers but coordinate their efforts via HTTP, RPC, or similar protocols
  • Massively parallel computing, in which a large number of networked processing units (100 or more) are housed in a single computer, usually a supercomputer
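
As a simple illustration of the multicore entry above, the following sketch (ours, using Python’s standard multiprocessing and os modules; the square function is a placeholder workload) sizes a process pool to the number of cores the machine reports and distributes independent work across them:

    import os
    from multiprocessing import Pool

    def square(n):
        return n * n

    if __name__ == "__main__":
        cores = os.cpu_count()  # number of processing units available
        with Pool(processes=cores) as pool:
            # Each worker process can be scheduled onto a separate core,
            # so the map proceeds in parallel rather than in sequence.
            results = pool.map(square, range(100))
        print(f"{cores} cores -> {results[:5]} ...")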
