It's All Relative
Note: This is the third installment in a series of related articles.
In my benchmark series of articles, I've explained the Standard Performance Evaluation Corporation (SPEC) and Transaction Processing Performance Council (TPC) benchmarks. As you'll recall, SPEC benchmarks measure the performance of the microprocessor, memory architecture and compiler of the system under test. TPC benchmarks measure the entire system, including the processor, I/O subsystem, network, database, compilers and OS.
In this installment, I'll explain the IBM* Relative Performance benchmark, rPerf. Obtaining detailed information about the rPerf benchmark has proven difficult. In fact, I found very little documentation on the topic; the quantitative details of what goes into an rPerf value simply aren't disclosed.
What is rPerf?
The rPerf benchmark was created in 2001 to replace IBM's previous benchmark, Relative Online Transaction Processing (ROLTP), as the metric for comparing the performance of IBM eServer pSeries* systems. IBM derives and uses rPerf as an estimate of commercial processing performance (OLTP-type transactions) relative to other pSeries systems. It's produced by a proprietary IBM analytical model that uses characteristics from IBM internal workloads and from TPC and SPEC benchmarks. (Note: rPerf results aren't intended to represent any specific public benchmark results, and no such inferences should be made.) The model simulates some system operations, such as the CPU, cache and maximum allowable memory; however, it doesn't simulate disk or network I/O. By the definitions above, then, rPerf sits closer to a SPEC benchmark than to a TPC benchmark.
rPerf estimates are calculated based on systems with maximum memory, the latest levels of the AIX* 5L OS and other pertinent software at the time of testing. IBM documentation specifies that actual performance will vary based on application and configuration details. IBM used the pSeries 640 as the baseline reference system, with a value of 1.0. IBM documentation also specifies that although rPerf may be used to compare estimated IBM AIX commercial-processing performance, actual system performance may vary and depends on many factors, including system-hardware configuration and software design and configuration.
IBM documentation also points out that all performance estimates are provided "as is," with no warranties or guarantees, express or implied. Additionally, it advises buyers to consult other sources of information, including system benchmarks and application-sizing guides, to evaluate the performance of a system under consideration for purchase.
System performance can be defined by the rate at which transactions are completed or by the amount of time it takes to complete a transaction. A server's performance also spans a range of possible values, depending on which features and functions are exploited by the OS, middleware and application.
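To make those two views concrete, here's a minimal sketch (using made-up numbers, not measurements from any real system) showing that throughput and time per transaction are reciprocal views of the same observation:

```python
# Two equivalent views of transaction performance, with illustrative numbers:
# throughput (transactions per second) vs. average time per transaction.
transactions_completed = 12_000   # hypothetical workload total
elapsed_seconds = 60.0            # hypothetical measurement window

# Rate at which transactions complete
throughput = transactions_completed / elapsed_seconds        # txn/sec

# Time it takes to complete one transaction (on average)
avg_time_per_txn = elapsed_seconds / transactions_completed  # sec/txn

print(f"{throughput:.0f} txn/s, {avg_time_per_txn * 1000:.1f} ms per transaction")
# → 200 txn/s, 5.0 ms per transaction
```

Either number fully determines the other, which is why benchmark reports can quote whichever view suits the audience.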
Since rPerf is meant to show the maximum possible performance of a pSeries system for an OLTP-type application, some applications will naturally show a larger relative performance than others. rPerf exploits all of the latest features and functions available at the time the value is published, which means an rPerf value typically represents the maximum possible performance for a given system. As with all benchmark results, the further the compared system is from the tested system, the larger the variability in the predicted results.
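Because rPerf is a relative metric anchored to the pSeries 640 baseline of 1.0, comparing two systems is just a ratio of their published values. The sketch below illustrates the arithmetic; the rPerf figures for "System A" and "System B" are invented for illustration and are not actual IBM numbers:

```python
# Hedged sketch: comparing systems by rPerf value. Only the pSeries 640
# baseline of 1.0 comes from IBM's definition; the other values are
# hypothetical. An rPerf of 6.0 means roughly 6x the baseline's estimated
# commercial (OLTP-type) processing performance.
rperf = {
    "pSeries 640 (baseline)": 1.0,   # baseline by definition
    "System A (hypothetical)": 6.0,  # invented for illustration
    "System B (hypothetical)": 9.0,  # invented for illustration
}

def relative_estimate(system_a: str, system_b: str) -> float:
    """Estimated performance of system_a relative to system_b."""
    return rperf[system_a] / rperf[system_b]

# In this made-up example, System B is estimated at 1.5x System A.
print(relative_estimate("System B (hypothetical)", "System A (hypothetical)"))
# → 1.5
```

Remember IBM's caveat: these ratios are estimates of best-case commercial workloads, and actual results will vary with application and configuration.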