
FLOPS

From Wikipedia, the free encyclopedia

This article is about the computing benchmark. For the poker term, see flop (poker).

In computing, FLOPS (or flops) is an acronym meaning FLoating point Operations Per Second. It is used as a measure of a computer's performance, especially in fields of scientific computing that make heavy use of floating-point calculations. (Compare MIPS, millions of instructions per second.) One should speak in the singular of a FLOPS and not of a FLOP, although the latter is frequently encountered; the final S stands for "second" and does not indicate a plural.

Alternatively, the singular FLOP (or flop) is used as an abbreviation for "floating-point operation", and a flop count is a count of these operations (e.g., required by a given algorithm or computer program). In this context, "flops" is simply the plural rather than a rate.

Computing devices exhibit an enormous range of performance levels in floating-point applications, so it makes sense to introduce larger units than the FLOPS. The standard SI prefixes can be used for this purpose, resulting in such units as megaFLOPS (MFLOPS, 10^6 FLOPS), gigaFLOPS (GFLOPS, 10^9 FLOPS), teraFLOPS (TFLOPS, 10^12 FLOPS), petaFLOPS (PFLOPS, 10^15 FLOPS) and exaFLOPS (EFLOPS, 10^18 FLOPS).
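As a minimal illustration (a Python sketch; the helper function and table are hypothetical, not part of any standard library), a raw FLOPS figure can be mapped onto these prefixed units as follows:

```python
# Illustrative helper: map a raw FLOPS figure onto the SI-prefixed
# units listed above. The function name and structure are hypothetical.
PREFIXES = [
    (1e18, "EFLOPS"),
    (1e15, "PFLOPS"),
    (1e12, "TFLOPS"),
    (1e9,  "GFLOPS"),
    (1e6,  "MFLOPS"),
]

def format_flops(flops):
    for factor, unit in PREFIXES:
        if flops >= factor:
            return f"{flops / factor:.1f} {unit}"
    return f"{flops:.1f} FLOPS"

print(format_flops(280.6e12))  # "280.6 TFLOPS" (the Blue Gene/L peak cited below)
```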

The performance spectrum

A relatively cheap but modern desktop computer using, for example, a Pentium 4 or Athlon 64 CPU typically runs at a clock frequency in excess of 2 GHz and provides computational performance in the range of a few GFLOPS, although the progression is not linear and depends in part on the performance of the CPU caches. Even some video game consoles of the late 1990s and early 2000s, such as the Nintendo GameCube and Sega Dreamcast, had performance in excess of one GFLOPS (but see below).

One of the first supercomputers, the Cray-1, was installed at Los Alamos National Laboratory in 1976. The Cray-1 was capable of 80 MFLOPS (or, according to another source, 138–250 MFLOPS). In the 30 years since then, the computational speed of supercomputers has jumped a millionfold.

According to the TOP500 list, the fastest computer in the world as of November 2006 was the IBM Blue Gene/L supercomputer, with a measured peak of 280.6 TFLOPS. This is close to twice the previous Blue Gene/L record of 136.8 TFLOPS, set when only half the machine was installed. Blue Gene/L (unveiled October 27, 2005) contains 131,072 processor cores, yet each of these cores is quite similar to those found in many mid-performance computers (PowerPC 440). Blue Gene/L is a joint project of the Lawrence Livermore National Laboratory and IBM.[1]

Cray Inc. has announced that it will be upgrading the Oak Ridge National Laboratory supercomputer. The upgraded machine will be capable of one petaFLOPS and is being advertised as three times as powerful as any other computer in the world. The upgrades are scheduled for completion by 2007 under a contract worth $200 million.[2]

In June 2006, a new computer was announced by the Japanese research institute RIKEN: the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, over three times faster than Blue Gene/L. MDGRAPE-3 is not a general-purpose computer, which is why it does not appear in the TOP500 list; it has special-purpose pipelines for simulating molecular dynamics. MDGRAPE-3 houses 4,808 custom processors, 64 servers each with 256 dual-core processors, and 37 servers each containing 74 processors, for a total of 40,314 processor cores, compared to the 131,072 needed by Blue Gene/L. MDGRAPE-3 is able to do many more computations with fewer chips because of its specialized architecture. The computer is a joint project between RIKEN, Hitachi, Intel, and the NEC subsidiary SGI Japan.

Distributed computing uses the Internet to link personal computers to achieve a similar effect: projects such as SETI@home, the Great Internet Mersenne Prime Search, and Einstein@Home aggregate the floating-point performance of many thousands of volunteered machines.[4][6][7]

Pocket calculators are at the other end of the performance spectrum. Each calculation request to a typical calculator requires only a single operation, so there is rarely any need for its response time to be faster than the operator can perceive. Any response time below 0.1 second is experienced as instantaneous by a human operator, so a simple calculator could be said to operate at about 10 FLOPS.

Humans are even slower floating-point processors. If it takes a person a quarter of an hour to carry out a pencil-and-paper long division with 10 significant digits, that person would be calculating in the milliFLOPS range. Bear in mind, however, that a purely mathematical test does not truly measure a human's FLOPS, as a human is simultaneously processing thoughts, consciousness, smells, sounds, touch, sight and motor coordination.
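Both of these back-of-envelope estimates follow from the same relation, rate = operations ÷ time. A minimal Python sketch of the arithmetic, restating the two examples above:

```python
# Back-of-envelope FLOPS estimate: rate = operations / time.
# The figures below restate the examples above and are illustrative only.
def flops_rate(operations, seconds):
    return operations / seconds

print(flops_rate(1, 0.1))      # calculator: 1 op per 0.1 s -> 10 FLOPS
print(flops_rate(1, 15 * 60))  # long division in 15 min -> ~0.0011 FLOPS (milliFLOPS)
```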

FLOPS as a measure of performance

In order for FLOPS to be useful as a measure of floating-point performance, a standard benchmark must be available on all computers of interest. One example is the LINPACK benchmark.
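For illustration, the spirit of such a benchmark can be sketched in a few lines of Python with NumPy (an assumption of this sketch, not part of LINPACK itself); the 2n³/3 + 2n² operation count is the standard estimate for solving a dense n×n system via LU factorization:

```python
import time
import numpy as np

def linpack_style_gflops(n=2000):
    """Time a dense n-by-n linear solve and convert to GFLOPS,
    in the spirit of the LINPACK benchmark (illustrative sketch)."""
    a = np.random.rand(n, n)
    b = np.random.rand(n)
    start = time.perf_counter()
    np.linalg.solve(a, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # standard LINPACK operation count
    return flops / elapsed / 1e9

print(f"approx. {linpack_style_gflops():.2f} GFLOPS")
```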

FLOPS in isolation are arguably not very useful as a benchmark for modern computers. There are many factors in computer performance other than raw floating-point computation speed, such as I/O performance, interprocessor communication, cache coherence, and the memory hierarchy. This means that supercomputers are in general capable of only a small fraction of their "theoretical peak" FLOPS throughput (obtained by adding together the theoretical peak FLOPS performance of every element of the system). Even when operating on large, highly parallel problems, their performance will be bursty, mostly because of the residual effects of Amdahl's law. Real benchmarks therefore measure both actual peak and sustained FLOPS performance.
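The "theoretical peak" referred to above is a simple product over the elements of the system. A hedged Python sketch, with wholly hypothetical machine figures:

```python
# Theoretical peak: multiply out the best case for every element of
# the system. Real sustained performance is only a fraction of this.
def theoretical_peak_flops(processors, cores_per_proc, clock_hz, flops_per_cycle):
    return processors * cores_per_proc * clock_hz * flops_per_cycle

# Hypothetical machine: 1,024 dual-core processors at 2 GHz,
# each core issuing 2 floating-point operations per cycle.
peak = theoretical_peak_flops(1024, 2, 2.0e9, 2)
print(f"{peak / 1e12:.1f} TFLOPS theoretical peak")  # 8.2 TFLOPS
```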

For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common, so measuring floating-point operation speed does not accurately predict how the processor will perform on arbitrary problems. However, for many scientific jobs, such as data analysis, a FLOPS rating is an effective measure.

Historically, the earliest reliably documented serious use of the floating-point operation as a metric appears to be the Atomic Energy Commission's justification to Congress for purchasing a Control Data CDC 6600 in the mid-1960s.

The terminology is sufficiently confused that, until April 24, 2006, U.S. export control was based upon measurement of "Composite Theoretical Performance" (CTP) in millions of "Theoretical Operations Per Second", or MTOPS. On that date, however, the U.S. Department of Commerce's Bureau of Industry and Security amended the Export Administration Regulations to base controls on Adjusted Peak Performance (APP) in Weighted TeraFLOPS (WT).

FLOPS, GPUs, and game consoles

Very high FLOPS figures are often quoted for inexpensive computer video cards and game consoles.

For example, the Xbox 360 has been announced as having total floating-point performance of around one TFLOPS, while the PlayStation 3 has been announced as having a theoretical 2.18 TFLOPS. By comparison, a high-end general-purpose PC would have a FLOPS rating of around ten GFLOPS if the performance of its CPU alone were considered. Taken at face value, the figures sometimes quoted for the consoles would even appear to class them as supercomputers. Such FLOPS figures should be treated with caution, as they are often the product of marketing: the game console figures are typically based on total system performance (CPU + GPU). In the extreme case, the TFLOPS figure is primarily derived from the function of the single-purpose texture filtering unit of the GPU. This piece of logic is tasked with computing a weighted average of sometimes hundreds of pixels in a texture during a look-up (particularly when performing a quadrilinear anisotropically filtered fetch from a 3D texture). Single-purpose hardware of this kind should never be included in an honest FLOPS figure.

Still, the programmable pixel pipelines of modern GPUs are capable of a theoretical peak performance that is an order of magnitude higher than that of a CPU. An NVIDIA 7800 GTX 512 is capable of around 200 GFLOPS, and the current (November 2006) NVIDIA 8800 GTX is capable of sustaining 330 GFLOPS. ATI's latest X1900 architecture (February 2006) has a claimed performance of 554 GFLOPS.[8] This is possible because 3D graphics operations are a classic example of a highly parallelizable problem that can easily be split between different execution units and pipelines, allowing a large speed gain to be obtained by scaling up the number of logic gates while taking advantage of the fact that the cost-efficiency sweet spot of (number of transistors) × frequency currently lies at around 500 MHz. This has to do with the imperfection rate in the manufacturing process, which rises sharply with frequency.

While CPUs devote their transistors to running a single thread of execution very quickly at very high frequency, GPUs pack in a great many more transistors running at a lower speed, because they are designed to process a large number of pixels simultaneously with no requirement that any individual pixel be completed quickly. Moreover, GPUs are not designed to perform branch operations (IF statements, which determine what will be executed based on the value of a piece of data) well. The circuits for this, in particular the circuits for predicting how a program will branch in order to ready data for it, consume an inordinate number of transistors on a CPU that could otherwise be used for floating-point units. Lastly, CPUs access data more unpredictably, which requires them to include a quantity of on-chip memory called a cache for quick random access; this cache accounts for the majority of a CPU's transistors.

General-purpose computing on GPUs is an emerging field that hopes to exploit the large advantage in raw FLOPS, as well as in memory bandwidth, of modern video cards. As an example, occlusion testing in games is often done by rasterizing a piece of geometry and counting the number of pixels changed in the z-buffer, a technique that is highly suboptimal in terms of floating-point operations. A few applications can even take advantage of the texture fetch unit to compute averages over one-, two-, or three-dimensional sorted data for a further boost in performance.

In January 2006, ATI Technologies launched a graphics sub-system that put in excess of 1 TFLOPS within the reach of most home users. To put this achievement in perspective, less than nine years earlier the US Department of Energy had commissioned the world's first TFLOPS supercomputer, ASCI Red, consisting of more than 9,200 processors. The original incarnation of this machine used Intel Pentium Pro processors, each clocked at 200 MHz; these were later upgraded to Pentium II OverDrive processors.

Cost of computing

  • 1997: about US$30,000 per GFLOPS, achieved with two 16-processor Pentium Pro Beowulf cluster computers [9]
  • 2000, May: $640 per GFLOPS, KLAT2, University of Kentucky
  • 2003, August: $82 per GFLOPS, KASY0, University of Kentucky
  • 2005: about $2.60 per GFLOPS ($300 for a CPU capable of 115 GFLOPS) in the Xbox 360, assuming Linux is implemented on it as intended [10] (see the arithmetic sketch after this list)
  • 2006, February: about $1 per GFLOPS in an ATI PC add-in graphics card (X1900 architecture)
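The arithmetic behind these figures is a simple division; a minimal Python sketch using the Xbox 360 entry above as the worked example:

```python
# Cost efficiency in dollars per GFLOPS, as used in the list above.
def dollars_per_gflops(price_usd, gflops):
    return price_usd / gflops

print(f"${dollars_per_gflops(300, 115):.2f}")  # Xbox 360 entry: ~$2.61 per GFLOPS
```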

This trend toward low cost follows Moore's law.

References

  1. Blue Gene/L tops its own supercomputer record
  2. Cray Signs $200 Million Contract to Deliver World's Largest Supercomputer to Oak Ridge
  3. [1]
  4. BOINC stats at SETI@Home
  5. [2]
  6. Internet PrimeNet Server, Parallel Technology for the Great Internet Mersenne Prime Search
  7. Einstein@Home - Server Status
  8. ATI launches new flagship graphics chip family X1900
  9. Loki and Hyglac
  10. Linux on Xbox 360
