
Amdahl's law

From Wikipedia, the free encyclopedia

Figure: The speedup of a program using multiple processors in parallel computing is limited by the sequential fraction of the program. For example, if half of the program is sequential, the theoretical maximum speedup from parallelisation is 2, no matter how many processors are used, since 1/(0.5 + (1 − 0.5)/N) tends to 2 as N becomes very large.

Amdahl's law, named after computer architect Gene Amdahl, is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.


Description

Amdahl's law can be stated more technically, but in the simplest terms it says that it is the algorithm, not the number of processors, that determines the achievable speedup: eventually a point is reached where the algorithm cannot be parallelised any further.

Amdahl's law is a demonstration of the law of diminishing returns: even if one part of a computer could be sped up a hundred-fold or more, if the improvement only affects 12% of the overall task, the best the overall speedup could possibly be is \frac{1}{1 - 0.12} \approx 1.136 times. (With a hundred-fold speedup of that 12%, the exact figure is 1/(0.88 + 0.12/100) ≈ 1.135, already essentially at this limit.)

Assume that a task has two independent parts, A and B, and that B takes roughly 25% of the time of the whole computation. By working very hard, one may be able to make this part 5 times faster, but this only reduces the time of the whole computation by a little. In contrast, one may need to perform less work to make part A twice as fast, yet this speeds up the computation far more than optimising part B, even though B received the larger speed-up (5× versus 2×).
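To make this concrete, take the whole computation to be 1 unit of time. Speeding up B (25% of the work) by a factor of 5 gives a new running time of 0.75 + 0.25/5 = 0.80, an overall speedup of only 1.25×; speeding up A (75% of the work) by a factor of 2 gives 0.75/2 + 0.25 = 0.625, an overall speedup of 1.6×.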

More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation where the improvement has a speedup of S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2). Amdahl's law states that the overall speedup of applying the improvement will be

\frac{1}{(1 - P) + \frac{P}{S}}.

To see how this formula is derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes (which is 1 − P) plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the improved part's former running time divided by the speedup, making it P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.
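As an illustrative sketch (in Python; the helper name amdahl_speedup is ours, not a standard routine), the formula can be written directly:

    def amdahl_speedup(p, s):
        # Overall speedup when a proportion p of the work is improved by a factor s.
        return 1.0 / ((1.0 - p) + p / s)

    # 30% of the computation made twice as fast, as in the example above:
    print(amdahl_speedup(0.3, 2))  # ~1.176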

Here is another example. We are given a task that is split into four parts: P1 = 0.11 (11%), P2 = 0.18 (18%), P3 = 0.23 (23%) and P4 = 0.48 (48%), which add up to 100%. P1 is not sped up, so S1 = 1; P2 is sped up 5×, so S2 = 5; P3 is sped up 20×, so S3 = 20; and P4 is sped up 1.6×, so S4 = 1.6. The new running time is \frac{P1}{S1} + \frac{P2}{S2} + \frac{P3}{S3} + \frac{P4}{S4} = \frac{.11}{1} + \frac{.18}{5} + \frac{.23}{20} + \frac{.48}{1.6} = 0.4575, a little less than half the original running time of 1. The overall speedup is therefore \frac{1}{\frac{P1}{S1} + \frac{P2}{S2} + \frac{P3}{S3} + \frac{P4}{S4}} = \frac{1}{.4575} \approx 2.186, a little more than double the original speed. Notice how the 20× and 5× speedups have little effect on the overall result when over half of the task is sped up only 1× (i.e. not at all) or 1.6×.
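The same arithmetic can be checked with a short script (again only a sketch; nothing here is a standard library routine):

    # Fraction of the task and the factor by which each part is sped up.
    parts = [(0.11, 1), (0.18, 5), (0.23, 20), (0.48, 1.6)]

    new_time = sum(p / s for p, s in parts)  # new running time, with the old time taken as 1
    print(new_time)      # ~0.4575
    print(1 / new_time)  # ~2.186 overall speedup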

Parallelisation

In the special case of parallelisation, Amdahl's law states that if F is the fraction of a calculation that is sequential (i.e. cannot benefit from parallelisation), and (1 − F) is the fraction that can be parallelised, then the maximum speedup that can be achieved by using N processors is

\frac{1}{F + (1-F)/N}.

In the limit, as N tends to infinity, the maximum speedup tends to 1/F. In practice, the price/performance ratio falls rapidly as N is increased once (1 − F)/N becomes small compared to F.
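A small sketch (illustrative values only; parallel_speedup is a name of our own) shows how quickly the speedup saturates for a sequential fraction F = 0.10:

    def parallel_speedup(f, n):
        # Speedup with sequential fraction f on n processors.
        return 1.0 / (f + (1.0 - f) / n)

    for n in (1, 2, 4, 16, 256, 65536):
        print(n, round(parallel_speedup(0.10, n), 3))
    # prints speedups that climb towards, but never reach, 1/F = 10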

As an example, if F is only 10%, the problem can be sped up by a factor of at most 10, no matter how large a value of N is used. For this reason, parallel computing is only useful for either small numbers of processors or for problems with very low values of F: so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce F to the smallest possible value.

Limitations

According to Amdahl's law, the theoretical maximum speedup from using N processors is N, i.e. linear speedup. In practice, however, it is not uncommon to observe a speedup of more than N on a machine with N processors, known as superlinear speedup. One possible cause is the effect of cache aggregation. In parallel computers, not only does the number of processors change, but so does the total cache accumulated across the processors. With this larger accumulated cache, more of the data set, or even all of it, can fit into cache, dramatically reducing memory access time and producing an additional speedup beyond that arising from pure computation.
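For instance (a purely hypothetical configuration), a working set of 8 MB that overflows a single processor's 2 MB cache may fit entirely within the combined 16 MB of cache on an 8-processor machine, so each processor's 1 MB share of the data is served from cache rather than main memory.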

Amdahl's Rule of Thumb

Amdahl's Rule of Thumb states that 1 byte of memory and 1 byte per second of I/O capacity are required for each instruction per second supported by a computer. It is also known as Amdahl's Other Law.
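By this rule, for example, a machine sustaining 100 million instructions per second would need roughly 100 MB of memory and 100 MB/s of I/O bandwidth to be considered balanced.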


References

  • Gene Amdahl, "Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities", AFIPS Conference Proceedings, (30), pp. 483-485, 1967.


Topics in parallel computing

General: High-performance computing
Theory: Speedup, Amdahl's law, Flynn's Taxonomy, Cost efficiency, Gustafson's Law
Elements: Process, Thread, Fiber, Parallel Random Access Machine
Coordination: Multiprocessing, Multitasking, Memory coherency, Cache coherency, Barrier, Synchronization, Distributed computing, Grid computing
Programming: Programming model, Implicit parallelism, Explicit parallelism
Hardware: Computer cluster, Beowulf, Symmetric multiprocessing, Non-Uniform Memory Access, Cache only memory architecture, Asymmetric multiprocessing, Simultaneous multithreading, Shared memory, Distributed memory, Massively parallel processing, Superscalar processing, Vector processing, Supercomputer
Software: Distributed shared memory, Application checkpointing
Problems: Embarrassingly parallel, Grand Challenge
