MAINFRAME > Administrator > Performance

Optimize Mainframe Virtualization

Tips to overcome 4 key performance-management challenges

Performance management is an important part of running a mainframe environment. Its requirements differ from those of distributed performance management in many ways that may not be intuitive to installations new to the mainframe, or to installations adding Linux* to an existing environment.

Unlike distributed platforms—where specific applications, for the most part, don’t impact other applications—every application has the potential to impact others in a shared-resource, fully virtualized environment where all resources, including processors and real memory, are shared. The concept of shared resources is very different in a z/VM* environment than in other virtualization options. With a history of almost 40 years, z/VM has been designed to overcommit resources, often with 30 or more Linux servers sharing one IFL and a real-storage overcommitment of three-to-one or more. Managing this overcommitment is a challenge that doesn’t exist in nonvirtualized, nonshared environments.
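The overcommitment arithmetic above can be sketched in a few lines. The figures here are illustrative assumptions consistent with the ratios the article cites, not measurements from a real z/VM installation:

```python
# Hypothetical figures illustrating z/VM real-storage overcommitment.
# None of these values come from a real system; they simply reproduce
# the three-to-one ratio described in the text.
real_storage_gb = 64          # real storage available to the LPAR (assumed)
guests = 30                   # Linux guests sharing one IFL (per the article)
virtual_storage_gb = 6.4      # virtual storage defined per guest (assumed)

total_virtual = guests * virtual_storage_gb
overcommit_ratio = total_virtual / real_storage_gb

print(f"Defined virtual storage: {total_virtual:.0f} GB")
print(f"Overcommitment ratio: {overcommit_ratio:.1f}:1")  # 3.0:1
```

The point of the calculation is that the sum of storage *defined* to the guests can far exceed the storage that physically exists—and keeping that gap safe is exactly the management task the article describes.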

Four areas of performance management should be addressed to optimize mainframe virtualization capabilities. Addressing these challenges ensures target service levels are met and helps manage hardware costs. In distributed environments, adding hardware is often less expensive than the cost of analysis, which leads to server sprawl with server resource utilization at less than 10 percent. Increasing server utilization requires management, and with z/VM, many systems today boast utilizations of more than 90 percent while still meeting required service levels. To do this, four forms of management are required:

»Performance analysis and tuning: to solve current performance issues
»Capacity planning: to understand growth and plan for capital expenditures and avoid future performance issues
»Chargeback and accounting: to ensure costs are understood and managed
»Operational alerts: to detect performance issues early and allow for corrective action


Many challenges to functional performance management aren’t intuitive to companies that haven’t yet faced the issues. Assumptions may be made that “business as usual” can be maintained, either from a distributed perspective or a z/OS* perspective. However, the technology to provide performance management for an environment of Linux servers operating under z/VM has its own requirements and needs planning and validation. For example, installations often have specific agents to detect operational problems. In a distributed environment where resource utilization may be 10 percent, an agent that uses an additional 5 percent isn’t an issue. However, put 20 servers on one IFL and that single agent has now consumed the IFL.
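The agent-overhead example works out as follows. The 5 percent figure comes from the text; treating it as a flat cost per agent instance is a simplifying assumption:

```python
# Aggregate cost of a per-server monitoring agent under consolidation,
# using the article's figures: 20 servers on one IFL, each running an
# agent that consumes 5 percent of the IFL (assumed flat per instance).
servers_per_ifl = 20
agent_overhead_per_server = 0.05   # fraction of one IFL per agent

total_overhead = servers_per_ifl * agent_overhead_per_server
print(f"Agent overhead: {total_overhead:.0%} of the IFL")  # 100% -- the whole IFL
```

An overhead that was noise on a dedicated server becomes the entire processor once 20 copies of it share one IFL, which is why agentless, low-overhead instrumentation matters in this environment.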

Performance management is more critical in a shared-resource environment running many servers and applications than in a nonshared dedicated-server environment where the only application being impacted is the one on the server. Linux on System z* installations must face the instrumentation and performance-management challenges of the virtualized environment to take full advantage of its power.

1. Performance Analysis and Tuning

The objective of performance analysis and tuning is to resolve performance problems when they occur. Performance analysis and tuning in the mainframe environment requires an understanding of shared resources, of many different subsystems and of how to detect bottlenecks in complex environments. Problems may occur in many different areas, from the DASD subsystem to real storage to the network itself. Assuming problems won’t happen invites Murphy’s law at its most extreme: they’ll happen at the worst possible time.

It’s not just z/VM system resources that require performance analysis, but also the inside of Linux: when processes are looping, when applications are consuming more-than-normal resources or when applications have storage cancers (i.e., they keep acquiring real storage but never release it). Swap-space activity and other file-system activity will need analysis as well, and all without using expensive agent technology that consumes additional resources.
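One low-overhead way to watch swap activity from inside a Linux guest is to read the kernel’s `/proc/meminfo` interface directly rather than run a heavy agent. A minimal sketch, using the standard `SwapTotal`/`SwapFree` fields; the sample snapshot and the 50 percent threshold are illustrative assumptions:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            info[key.strip()] = int(rest.split()[0])  # values are in kB
    return info

def swap_used_fraction(info):
    """Fraction of configured swap in use; 0.0 when no swap exists."""
    total = info.get("SwapTotal", 0)
    return (total - info.get("SwapFree", 0)) / total if total else 0.0

# On a real guest you would read open("/proc/meminfo").read();
# here a sample snapshot stands in (figures are illustrative).
sample = "MemTotal: 1024000 kB\nSwapTotal: 524288 kB\nSwapFree: 131072 kB\n"
info = parse_meminfo(sample)
print(f"swap used: {swap_used_fraction(info):.0%}")  # 75% of swap in use
```

Because this just reads a kernel-exported text file on a polling interval, its cost is negligible—exactly the property the article argues instrumentation must have when dozens of guests share one IFL.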

Barton Robinson, chief architect with Velocity Software, has spent much of the last 30 years focusing on performance management and developing tools and technology to support infrastructure requirements.
