The Right Tools for the Systems Optimization Job
Consolidation. Optimization. Rationalization. Concentration. Integration. These, and many other words, have been used to describe the wave of activity that has swept the IT industry and most businesses during the last three to five years. Vast amounts of IT dollars and human resources have been expended during this time to correct many of the distributed development and deployment decisions made five to 10 years ago.
Today, many CIOs no longer simply ask, "Why do we have so many servers?" or "Why is it costing me so much to process and maintain information in a distributed environment?" IT cost is now, and was then, a major driver in most decisions, and the appeal of drastically reducing the unit cost of processing through the use of apparently inexpensive servers was irresistible.
What many organizations failed to consider, in their zeal to cut hardware costs and ultimately the IT bottom line, were the effects of growing these initially low-cost server environments into large, complex networks of independent systems. Each came with its own set of requirements in terms of design, procurement, implementation, maintenance, management and, finally, replacement. IT professionals supporting large distributed UNIX* and Wintel server infrastructures have seen hardware costs become a shrinking piece of the total IT cost pie, while elements such as software and people costs continue to rise.
CIOs are aware of this growing IT complexity and cost and are interested in learning new ways to deal with them. They want to know about solutions that others are implementing to help reduce complexity and run more efficient IT operations. They want to understand the newest tools of the trade, including on demand computing, virtualization, high-density hardware packaging, autonomic systems, grid and so on. Perhaps more importantly, they want solutions, specifically tools and techniques, that will not only reduce cost, but also position their IT infrastructure to respond more flexibly to rapidly changing business environments.
Let's examine some of the tools and techniques (see Figure 1) that have recently surfaced in the systems-optimization arena and how they can positively affect IT costs and complexities.
One can categorize the available methods, tools and techniques for systems optimization in many different ways. I've grouped them into four discrete but important areas:
Development and deployment standardization
Manual best-fit techniques
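To make the best-fit idea concrete, here is a minimal illustrative sketch (not drawn from the article or from IBM's methodology): the classic best-fit heuristic assigns each workload to the consolidation target with the least remaining headroom that can still hold it, which is the calculation optimization teams often approximate by hand when mapping workloads onto fewer servers. The function name, inputs (CPU demand units and server capacities) and outputs are assumptions for illustration only.

```python
def best_fit(workloads, capacities):
    """Assign each workload (in CPU units) to the target server whose
    remaining capacity is the smallest that still fits the workload.
    Returns the placements and the leftover headroom per server."""
    remaining = list(capacities)   # headroom left on each target server
    placement = []                 # list of (demand, server index or None)
    for demand in sorted(workloads, reverse=True):  # largest first
        # Candidate servers that can still host this workload
        candidates = [(room, i) for i, room in enumerate(remaining)
                      if room >= demand]
        if not candidates:
            placement.append((demand, None))  # no target can host it
            continue
        _, best = min(candidates)             # tightest fit wins
        remaining[best] -= demand
        placement.append((demand, best))
    return placement, remaining
```

For example, consolidating four workloads of 30, 10, 60 and 20 CPU units onto two servers with capacities 100 and 50 fills the first server completely and leaves 30 units free on the second. In practice, teams extend the fit check beyond CPU to memory, I/O and affinity constraints, which is where the "manual" judgment comes in.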
The entries within each of these areas identify the major tools, techniques or approaches that many IT organizations have been using to help reduce IT cost and complexity within their enterprises and server infrastructures. I'll comment on the major categories and successes based on server optimization studies conducted by IBM's IT Server Infrastructure Optimization Team.
One of the greatest impacts on managing long-term IT costs and server infrastructure complexity has less to do with the hardware technologies an enterprise chooses than with its choice of development and deployment standards.