Ease-of-Management Tools for Virtualization
This new column, AIX* Corner Store, will be a regular feature in eServer Magazine, IBM edition for UNIX*. It's the place for AIX administrators to find tactical tips and not-so-widely-known tools.
As industry goals shift toward an on demand philosophy, you must rely on and exploit technologies such as virtualization. The idea of a computing infrastructure that responds to changes in the business climate and to evolving computing needs is as exciting as it is challenging, and it requires examining the foundation in terms of availability, responsiveness and durability. POWER5* technology and AIX v5.3 provide a basis on which to build an on demand business, but you must address several aspects of this strategic platform to meet those needs effectively.
Virtualization isn't only dynamic resource allocation; it also encompasses provisioning--the installation and implementation of new servers or partitions as needed. A changing infrastructure needs powerful security that recognizes and validates new entities and protects against outside threats. You must also deploy advanced ways to monitor and react to issues that arise across the enterprise. And the underlying servers, regardless of the technology, must adhere to best practices and strategies that help ensure availability as the ecosystem evolves.
One of the key issues an IT department must address is its maintenance strategy. When availability requirements approach 24-7 operation and 5-9s (99.999 percent) uptime, it's imperative to develop a plan for performing routine and required maintenance (firmware, software, OS) while preserving availability. It's tempting but dangerous to assume that a 24-7 environment can simply forgo maintenance downtime. Cyclical firmware and OS fixes must be treated as planning items, just as highly critical required maintenance must be accommodated, and availability requirements must be revisited as workloads are combined.
An important decision when implementing an environment is which applications will share machines and/or partitions. Applications that are well suited to coexist in terms of code compatibility and performance may still have uptime stipulations that warrant configuration concessions or even separation. The most obvious solution to this dilemma is some type of failover, high-availability (HA) software--whether High Availability Cluster Multi-Processing (HACMP) or another purchased or homegrown failover solution. When maintenance becomes necessary, you initiate a failover, apply the maintenance to the now-inactive server, fail the systems back and, after a burn-in period, repeat the process on the other server. This approach leaves a window when the system runs without a backup server, with the inherent risk of an outage should a failure occur while it's operating alone. If you have the opportunity to implement rolling HACMP maintenance in a partitioned environment, it's warranted for highly critical environments.
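The rolling-maintenance sequence can be sketched in shell. The move_workload, apply_maintenance and verify_workload helpers below are hypothetical stubs standing in for your site's actual cluster commands (HACMP resource-group operations, firmware update tools and application health checks); only the ordering of the steps is the point here.

```shell
#!/bin/sh
# Rolling-maintenance sketch for a two-node HA pair. The three helpers
# are hypothetical placeholders -- substitute your real cluster,
# maintenance and verification commands.

move_workload()      { echo "moving workload to $1"; }
apply_maintenance()  { echo "applying firmware/OS fixes on $1"; }
verify_workload()    { echo "verifying application health on $1"; }

for node in nodeA nodeB; do
    # Pick the other node of the pair as the temporary home.
    peer=$( [ "$node" = "nodeA" ] && echo nodeB || echo nodeA )

    move_workload "$peer"         # fail over; $node is now inactive
    apply_maintenance "$node"     # patch the inactive server
    move_workload "$node"         # fail back onto the patched node
    verify_workload "$node"       # burn-in before repeating on the peer
done
```

Note that between the failover and the fail-back, the workload runs without a backup server -- the window of single-node risk the article warns about.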
One more note: if you use a failover mechanism, test it routinely and maintain it stringently as the environment changes. Whenever hardware or software is added, review and retest the failover scenario. Test the failover before you need it.
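As a minimal sketch of making that drill routine, the wrapper below checks the standby's health before relying on it. CHECK_CMD is a placeholder -- in practice it should be an application-level probe, not just a ping -- and the drill should be run after every change window.

```shell
#!/bin/sh
# Failover-drill sketch: run after every hardware or software change.
# CHECK_CMD is a hypothetical health probe; it defaults to a stub so
# the sketch is runnable as-is.

CHECK_CMD=${CHECK_CMD:-true}

drill() {
    standby=$1
    if sh -c "$CHECK_CMD"; then
        echo "standby $standby healthy: proceed with controlled failover"
    else
        echo "standby $standby failed health check: fix before relying on it"
        return 1
    fi
}

drill nodeB
```

A drill that only runs when the standby passes its health check avoids discovering a broken backup in the middle of a real outage.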