MAINFRAME > Tips & Techniques > Systems Management

Don't Be Misled By MIPS


One of the most misused terms in IT has to be MIPS. It's supposed to stand for "millions of instructions per second," but many alternate meanings have been substituted:

  • Misleading indicator of processor speed
  • Meaningless indicator of processor speed
  • Marketing indicator of processor speed
  • Management's impression of processor speed

Jokes aside, management tends to want a single figure to represent a processor's capacity, and companies are spending large amounts of money on both hardware and software acquisitions based on this poorly understood indicator.

Unfortunately, no one number describes capacity. Processor speed varies depending on many factors, including (but not restricted to):

  • Workload mix
  • Memory/cache sizes
  • I/O access density
  • Software levels (OS and application subsystems)
  • Hardware changes
  • Partitioning

Workload mix is the largest contributor to the variability of capacity figures. An online workload has more of a random access pattern than batch (sequential) processing. Online subsystems, by design, rarely access data in a sequential pattern; they constantly request new records from disk or (hopefully) from buffers.

This leads to the next point: memory and cache sizes can significantly improve the throughput of a processor. Online subsystems buffer data and manage it on a least recently used (LRU) basis, on the theory that recently referenced data is likely to be accessed again. If there is a hit (the data already resides in a buffer), the expensive physical I/O is bypassed. The same principle applies to the processor cache: memory access isn't direct; the processor moves data into a local cache and accesses it from there, so frequent cache reloads slow the processor down.
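The LRU buffering idea can be sketched in a few lines of Python. This is a generic illustration, not mainframe subsystem code; the buffer capacity and record names are made up:

```python
from collections import OrderedDict

class LRUBufferPool:
    """Toy buffer pool: keeps the most recently used records in memory,
    evicting the least recently used record when the pool is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()  # record_id -> record data
        self.hits = 0
        self.misses = 0

    def read(self, record_id):
        if record_id in self.buffers:
            self.hits += 1                        # buffer hit: physical I/O bypassed
            self.buffers.move_to_end(record_id)   # mark as most recently used
            return self.buffers[record_id]
        self.misses += 1                          # buffer miss: do the "physical I/O"
        data = f"record-{record_id}"              # stand-in for a disk read
        self.buffers[record_id] = data
        if len(self.buffers) > self.capacity:
            self.buffers.popitem(last=False)      # evict least recently used
        return data

pool = LRUBufferPool(capacity=2)
for rid in [1, 2, 1, 3, 1, 2]:
    pool.read(rid)
print(pool.hits, pool.misses)  # prints "2 4": re-referenced records hit the buffer
```

The re-references to record 1 are satisfied from the buffer; every miss would cost a physical I/O, which is exactly the expense a larger buffer pool avoids.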

The more I/O an application does, the less processor throughput it sustains. This is due to interrupt processing: the task is suspended until the I/O completes, then re-dispatched. Again, larger memory sizes can reduce the impact, if the application makes use of them.
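A back-of-the-envelope model shows why I/O density eats into throughput. The per-I/O overhead figure here is invented purely for illustration, not a measured mainframe number:

```python
# Illustrative only: the overhead per I/O is an assumed figure, not a measurement.
def effective_cpu_fraction(io_per_sec, overhead_ms_per_io):
    """Fraction of each second the task actually computes, given that every
    I/O costs some overhead (interrupt, suspend, re-dispatch)."""
    overhead_sec = io_per_sec * overhead_ms_per_io / 1000.0
    return max(0.0, 1.0 - overhead_sec)

# Halving physical I/O (e.g., via bigger buffers) recovers throughput:
print(effective_cpu_fraction(500, 0.4))  # 500 I/Os/sec at 0.4 ms each -> 0.8
print(effective_cpu_fraction(250, 0.4))  # half the I/O rate -> 0.9
```

The point of the model is the direction, not the numbers: every physical I/O avoided hands time back to the application.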

As software enhancements are released, they can and do employ different techniques that improve or degrade expected processor capacity. One example is the conversion from 31-bit addressing to 64-bit.
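For a sense of scale, the arithmetic behind that conversion is simple (this is just address-space math, not a statement about any particular OS release):

```python
# 31-bit vs. 64-bit addressing: maximum addressable space.
GIB = 2**30  # gibibyte
EIB = 2**60  # exbibyte

print(f"31-bit: {2**31 // GIB} GiB")  # prints "31-bit: 2 GiB"
print(f"64-bit: {2**64 // EIB} EiB")  # prints "64-bit: 16 EiB"
```

Moving past the 2 GiB limit changes how software lays out and buffers data, which in turn shifts the capacity a given processor appears to deliver.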

 

Ted MacNeil is a capacity/performance analyst with more than 25 years in the IBM mainframe environment. Ted can be reached at tedmacneil@alumni.uwaterloo.ca.

