In 1967 Gene Amdahl presented a way to calculate the maximum speedup that can be obtained by running a computer program in parallel, i.e. by splitting the work the program needs to do across multiple cores in the computer. The idea is that some part of the program can be made to run in parallel, while some part must remain serial. If the total time the program takes to run is normalized to 1, then f is the fraction of that time the program spends on a single core (running serially) and 1-f is the fraction it spends on multiple cores (running in parallel).

(Figure: Amdahl's Law, showing the serial fraction f and the parallel fraction 1-f of a program's runtime.)

So in our effort to make a program run as quickly as possible, we need to

  • Parallelize some of the program
    • some must remain serial
  • f is the fraction of the calculation that is serial
  • 1-f is the fraction of the calculation that is parallel

In parallel computing, the speedup is (roughly speaking) the ratio of the time it takes to run a program on one core or processor to the time it takes to run the same program on multiple cores, say P of them:

$$S = \frac{T_{1}}{T_{P}}$$
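For instance, with purely illustrative timings (not measured anywhere): if a program takes 100 seconds on one core and 30 seconds on 8 cores, the speedup is

$$S = \frac{T_{1}}{T_{8}} = \frac{100\ \text{s}}{30\ \text{s}} \approx 3.3$$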

Amdahl’s Law says that the maximum speedup that can be obtained by using P processors is:

$$S_{max} = \frac{1}{f + \frac{(1-f)}{P}}$$
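To make the behaviour of this formula concrete, here is a small Python sketch (the function name amdahl_speedup and the chosen values of f and P are purely illustrative) that tabulates the maximum speedup as the processor count grows, for a calculation that is 10% serial:

```python
def amdahl_speedup(f, p):
    """Maximum speedup for a serial fraction f on p processors (Amdahl's Law)."""
    return 1.0 / (f + (1.0 - f) / p)

# Tabulate how the speedup saturates as the processor count grows,
# for a calculation that is 10% serial (f = 0.1).
f = 0.1
for p in (1, 2, 4, 16, 64, 1024):
    print(f"P = {p:4d}  S_max = {amdahl_speedup(f, p):.2f}")
# The printed values climb towards 10 = 1/f but never reach it.
```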

As an example, if 25% of the calculation must remain serial, the best speedup you can obtain is 4, and even that is only reached with a huge (ideally infinite) number of processors doing the parallel part of the work.
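Spelling out the limiting case with the formula above: as the number of processors grows without bound, the parallel term vanishes and

$$\lim_{P \to \infty} \frac{1}{f + \frac{(1-f)}{P}} = \frac{1}{f} = \frac{1}{0.25} = 4$$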

As a consequence, we need to parallelize as much of the program as possible, i.e. keep the serial fraction f as small as we can, to get the best advantage from multiple processors.