
Benchmark (computing)

From Wikipedia, the free encyclopedia

This article is about the use of benchmarks in computing. For other uses, see Benchmark.

In computing, a benchmark is the result of running a computer program, a set of programs, or other operations in order to assess the relative performance of an object, by running a number of standard tests and trials against it. The term benchmark is also commonly used for the specially designed benchmarking programs themselves. Benchmarking is usually associated with assessing the performance characteristics of computer hardware, for example the floating-point performance of a CPU, but there are circumstances in which the technique is also applicable to software. Software benchmarks are run against, for example, compilers or database management systems.

Benchmarks provide a method of comparing the performance of various subsystems across different chip/system architectures.

Purpose

As computer architecture advanced, it became more and more difficult to compare the performance of various computer systems simply by looking at their specifications. Therefore, tests were developed that could be performed on different systems, allowing the results from these tests to be compared across different architectures. For example, while Intel Pentium 4 processors generally operate at a higher clock frequency than AMD Athlon XP processors, this does not necessarily translate to more computational power. In other words, a 'slower' AMD processor, in terms of clock frequency, can perform as well on benchmark tests as an Intel processor operating at a higher frequency.

Benchmarks are designed to mimic a particular type of workload on a component or system. "Synthetic" benchmarks do this with specially created programs that impose the workload on the component. "Application" benchmarks instead run actual real-world programs on the system. Whilst application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks remain useful for testing individual components, such as a hard disk or networking device.
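As a minimal sketch of the synthetic approach (illustrative only; the loop count and the arithmetic are arbitrary assumptions, not any standard benchmark), a synthetic CPU benchmark in C might impose a purely artificial floating-point workload and time it:

    #include <stdio.h>
    #include <time.h>

    /* A deliberately artificial floating-point workload: no real application
       computes this, which is what makes the benchmark "synthetic". */
    int main(void) {
        const long iterations = 100000000L;  /* arbitrary, illustrative loop count */
        volatile double x = 1.0;             /* volatile keeps the compiler from deleting the loop */
        clock_t start = clock();
        for (long i = 0; i < iterations; i++)
            x = x * 1.0000001 + 0.0000001;   /* one multiply and one add per iteration */
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%ld iterations in %.3f s (%.1f million FP ops/s)\n",
               iterations, seconds, 2.0 * iterations / seconds / 1e6);
        return 0;
    }

An application benchmark would instead time the system running, say, a real database server under a recorded query load.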

Benchmarks are particularly important in semiconductor microprocessor design, giving processor architects the ability to measure and make tradeoffs in microarchitectural decisions. For example, if a benchmark extracts the key algorithms of an application, it will contain the performance-sensitive aspects of that application. Running this much smaller "snippet" on a cycle-accurate simulator can give clues on how to improve performance.

Prior to 2000, computer and microprocessor architects used SPEC to do this, although SPEC's Unix-based benchmarks were quite lengthy and thus unwieldy to use intact.

Computer manufacturers have a long history of trying to set up their systems to give unrealistically high performance on benchmark tests that is not replicated in real usage. For instance, during the 1980s some compilers could detect a specific mathematical operation used in a well-known floating-point benchmark and replace the operation with a mathematically equivalent operation that was much faster. However, such a transformation was rarely useful outside the benchmark until the mid-1990s, when RISC and VLIW architectures emphasized the importance of compiler technology as it related to performance. Benchmarks are now regularly used by compiler companies to improve not only their own benchmark scores, but real application performance.
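The kind of transformation described above can be illustrated with a hypothetical C example (the actual benchmark and compiler are not named in the text): a compiler that recognizes a summation loop in benchmark code can substitute the mathematically equivalent closed form, which is dramatically faster but says nothing about how the machine executes loops.

    #include <stdio.h>

    /* What the benchmark source contains: an O(n) summation loop. */
    double sum_loop(long n) {
        double s = 0.0;
        for (long i = 1; i <= n; i++)
            s += (double)i;
        return s;
    }

    /* What a benchmark-aware compiler could silently substitute:
       the mathematically equivalent closed form n*(n+1)/2, computed in O(1). */
    double sum_closed_form(long n) {
        return (double)n * (n + 1) / 2.0;
    }

    int main(void) {
        long n = 1000000L;
        printf("loop: %.0f  closed form: %.0f\n", sum_loop(n), sum_closed_form(n));
        return 0;
    }

Both functions return the same value, so the substitution is legal, yet a score obtained this way reflects the compiler's pattern matching rather than the hardware's loop performance.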

Manufacturers commonly report only those benchmarks (or aspects of benchmarks) that show their products in the best light. They have also been known to misrepresent the significance of benchmarks, again to show their products in the best possible light. Taken together, these practices are called bench-marketing. Application benchmark results, in particular, can differ significantly from synthetic benchmark results on the same hardware.

Users are advised to treat benchmarks, particularly those provided by manufacturers themselves, with caution unless the benchmarks are certified and relate directly to a recognizable application workload. Ideally, benchmarks should substitute for real applications only if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is really critical, the only benchmark that matters is the actual workload the system will be used for. If that is not possible, benchmarks that resemble real workloads as closely as possible should be used, and even then treated with skepticism unless independently certified. It is quite possible for system A to outperform system B when running program "furble" on workload X (the workload in the benchmark), and for the order to be reversed with the same program on your own workload.

Challenges

Benchmarking is not easy and often involves several iterative rounds in order to arrive at predictable, useful conclusions. Interpretation of benchmarking data is also extraordinarily difficult. Here is a partial list of common challenges:

  • Vendors tend to tune their products specifically for industry-standard benchmarks. Use extreme caution in interpreting such results.
  • Benchmarks generally do not give any credit for any qualities of service aside from raw performance. Examples of unmeasured qualities of service include security, availability, reliability, execution integrity, serviceability, scalability (especially the ability to quickly and nondisruptively add or reallocate capacity), etc. There are often real trade-offs between and among these qualities of service, and all are important in business computing. TPC Benchmark specifications partially address these concerns by specifying ACID property tests, database scalability rules, and service level requirements.
  • In general, benchmarks do not measure TCO. TPC Benchmark specifications partially address this concern by specifying that a price/performance metric must be reported in addition to a raw performance metric, using a simplified TCO formula.
  • Benchmarks seldom measure real-world performance of mixed workloads — running multiple applications concurrently in a full, multi-department/multi-application business context. For example, IBM's mainframe servers (System z9) excel at mixed workloads, but industry-standard benchmarks don't tend to measure the strong I/O and large/fast memory design such servers require. (Most other server architectures dictate fixed-function, single-purpose deployments, e.g. "database servers", "Web application servers", and "file servers", and benchmarks measure only that. The better question is, "How much more computing infrastructure would I need to fully support all this extra workload?")
  • Vendor benchmarks tend to ignore requirements for development, test, and disaster-recovery computing capacity. Vendors prefer to report only what is narrowly required for production capacity, to make their initial acquisition price seem as low as possible.
  • Benchmarks are having trouble adapting to widely distributed servers, particularly those with extra sensitivity to network topologies. The emergence of grid computing, in particular, complicates benchmarking since some workloads are "grid friendly," while others are not.
  • Users can have very different perceptions of performance than benchmarks may suggest. In particular, users appreciate predictability — servers that always meet or exceed SLAs. Benchmarks tend to emphasize mean scores (IT perspective) rather than low standard deviations (user perspective).
  • Many server architectures degrade dramatically at high (near 100%) levels of utilization, "falling off a cliff", and benchmarks should (but often do not) take that factor into account. Vendors, in particular, tend to publish server benchmarks at a continuous utilization of roughly 80%, an unrealistic scenario, and do not document what happens to the overall system if demand spikes beyond that level.
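The "cliff" in the last point can be made concrete with elementary queueing theory. Under the simple M/M/1 model (an assumption; real servers are more complex), the mean response time is R = S / (1 - U) for mean service time S and utilization U, so latency grows without bound as utilization approaches 100%. A small C sketch:

    #include <stdio.h>

    int main(void) {
        const double service_time_ms = 10.0;  /* assumed mean service time */
        const double utilizations[] = {0.50, 0.80, 0.90, 0.95, 0.99};
        /* M/M/1 mean response time: R = S / (1 - U) */
        for (int i = 0; i < 5; i++)
            printf("utilization %2.0f%% -> mean response time %6.0f ms\n",
                   utilizations[i] * 100.0,
                   service_time_ms / (1.0 - utilizations[i]));
        return 0;
    }

In this model, going from 80% to 99% utilization multiplies the mean response time by a factor of twenty, which is why a benchmark run at a steady 80% load reveals little about behavior during demand spikes.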

Types of benchmarks

  1. Real program
    • word processing software
    • CAD tool software
    • user's application software (e.g. MIS)
  2. Kernel
    • contains key code, normally abstracted from an actual program
    • popular kernel: Livermore loops
    • LINPACK benchmark (basic linear algebra subroutines written in Fortran)
    • results are represented in MFLOPS (see the DAXPY sketch after this list)
  3. Toy benchmark / microbenchmark
    • user-programmable; used to test a computer's basic components
    • automatic detection of a computer's hardware parameters, such as number of registers, cache size, and memory latency
  4. Synthetic benchmark
    • procedure for programming a synthetic benchmark:
      • take statistics of all types of operations from many application programs
      • get the proportion of each operation
      • write a program based on those proportions
    • well-known synthetic benchmarks: Whetstone and Dhrystone
    • Whetstone results are represented in KWIPS (kilo Whetstone instructions per second); synthetic benchmarks of this kind are not well suited to measuring pipelined computers
  5. I/O benchmarks
  6. Parallel benchmarks: used on machines with multiple processors or on systems consisting of multiple machines.
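As a sketch of a kernel benchmark in the LINPACK style (illustrative only; this is not the actual LINPACK code), the following C program times the basic linear algebra operation y = a*x + y ("DAXPY") and reports the result in MFLOPS:

    #include <stdio.h>
    #include <time.h>

    #define N    1000000   /* vector length (illustrative) */
    #define REPS 100       /* repetitions, to get a measurable runtime */

    static double x[N], y[N];

    int main(void) {
        for (long i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        const double a = 3.0;
        clock_t start = clock();
        for (int r = 0; r < REPS; r++)
            for (long i = 0; i < N; i++)
                y[i] = a * x[i] + y[i];   /* DAXPY: one multiply and one add per element */
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        /* Two floating-point operations per element per repetition. */
        printf("checksum=%f, %.1f MFLOPS\n", y[0], 2.0 * N * REPS / seconds / 1e6);
        return 0;
    }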

Common benchmarks

Open source benchmarks

  • Dhrystone: integer arithmetic performance
  • Fhourstones: an integer benchmark
  • Whetstone: floating-point arithmetic performance
  • ApFloat: floating point
  • Linpack / LAPACK
  • GliBench: a GUI-based benchmarking tool to check CPU and hard disk performance.
  • MemPerf: memory bandwidth
  • LLCBench: a group of benchmarks for cache, MPI, etc.
  • LMbench: a suite of simple, portable benchmarks for OS and some CPU/memory parameters
  • Calibrator: a small cache-memory and TLB calibration tool written in C. Results include cache size, line size, access/miss latency, TLB entries, page size, miss penalty, and memory access latency. gnuplot scripts are also generated to display results graphically.
  • X-Ray: automatic measurement of memory hierarchy parameters
  • nbench: memory, integer, and floating-point benchmark compared against an AMD K6 at 233 MHz; ported from BYTE Magazine's BYTEmark benchmark program.
  • Ubench: Unix Benchmark Utility for testing CPU(s) and memory.
  • NAS parallel benchmarks
  • Povray: 3D render
  • SPLASH: Stanford Parallel Applications for Shared Memory
  • IOzone: a filesystem benchmark tool that generates and measures a variety of file operations.
  • Bonnie++: File I/O
  • netperf: network throughput and latency benchmark
  • GENESIS distributed memory benchmark suite
  • HINT: ranks a computer system as a whole.
  • Himeno Benchmark
  • STREAM: measures sustainable memory bandwidth and the corresponding computation rate for simple vector kernels (a simplified triad sketch follows this list).
  • SKaMPI: a suite of tests designed to measure the performance of MPI.
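As an illustration of what a memory-bandwidth benchmark such as STREAM measures (a simplified sketch, not the actual STREAM code), the following C program times the "triad" vector kernel a[i] = b[i] + s*c[i] over arrays chosen to be much larger than typical caches and reports the sustained bandwidth:

    #include <stdio.h>
    #include <time.h>

    #define N 10000000   /* large enough to defeat typical caches; an assumption */

    static double a[N], b[N], c[N];

    int main(void) {
        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        const double s = 3.0;
        clock_t start = clock();
        for (long i = 0; i < N; i++)
            a[i] = b[i] + s * c[i];      /* STREAM-style "triad" kernel */
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        /* Three arrays of doubles move through memory: two reads and one write. */
        double gbytes = 3.0 * N * sizeof(double) / 1e9;
        printf("checksum=%f, %.2f GB/s\n", a[0], gbytes / seconds);
        return 0;
    }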

Books

Jim Gray (Editor), The Benchmark Handbook for Database and Transaction Systems (2nd Edition), Morgan Kaufmann, 1993, ISBN 1-55860-292-5

See also

  • Benchmark Generation Modeling Language
  • Benchmarking (business perspective)
