
VIOS 101: I/O


In Part 1, I wrote an introduction to virtualization and then discussed CPU virtualization with PowerVM. In Part 2, I addressed memory technologies, and in Part 3, we looked at network operations. In this final article, we’ll review the options for virtualizing I/O.

What Are Our I/O Options?

Virtualization of I/O for LPARs comes in many flavors. Options include virtual SCSI (vSCSI), virtual Fibre Channel (VFC), virtual Optical (vtOpt), virtual Tape (vtTape) and shared storage pools (SSPs). VFC uses N-Port ID Virtualization (NPIV), which allows multiple Fibre Channel node port (N_Port) IDs to share a single physical N_Port, thus allowing multiple LPARs to share a single physical Fibre adapter port.

To provide a boot disk to an LPAR with dedicated I/O, a slot must be taken up by the adapter card that connects the disk, and that slot gets dedicated to the LPAR. As servers get larger and faster, more LPARs are being consolidated onto them, which causes a significant increase in the amount of disk and the number of slots consumed just to boot LPARs, never mind the actual data disks. vSCSI and NPIV (VFCs) allow a VIO server to own the adapters and the disks; the VIO server can then carve those disks up and provide chunks of disks (or whole disks) to client LPARs, each of which sees a full disk that it can use for booting or for data. As an example, most LPARs’ rootvgs are between 30 and 45 GB, depending on how clean they keep their rootvg, while the smallest physical disk now is around 146 GB. With a VIO server, that 146 GB disk can easily be carved into three logical volumes, each of which could be a boot volume for a different LPAR. Even using two VIO servers for redundancy, this is still a significant reduction in disks, PCI cards and I/O drawers.
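
As a rough sketch of carving one of those disks into boot volumes on the VIO server (hdisk2, rootvg_clients, lpar1_rootvg and vhost0 are example names only), the commands would look something like this:

# example device names throughout; substitute your own disk, VG, LV and vhost adapter
mkvg -vg rootvg_clients hdisk2                               # build a volume group on the 146 GB disk
mklv -lv lpar1_rootvg rootvg_clients 45G                     # carve out a 45 GB boot LV for one client
mkvdev -vdev lpar1_rootvg -vadapter vhost0 -dev lpar1_rvg    # map that LV to the client's vhost adapter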

Also note that a physical adapter can support both NPIV and vSCSI concurrently. Some people like to use vSCSI for boot disks and NPIV for data, because NPIV requires the MPIO (multipath I/O) drivers to be installed in the client LPAR, which can make things tricky when upgrading the operating system and the MPIO drivers together. Any of the three options (NPIV only, vSCSI only or both) works just fine as long as you pay attention to upgrade and other requirements.

vSCSI doesn’t support load balancing across virtual adapters in a client LPAR. With VFCs, device drivers such as SDD, SDDPCM or Atape must be installed in the client partition for the disk or tape devices, and SDD or SDDPCM allow load balancing across virtual adapters. However, upgrading these drivers requires special attention when SAN devices are used as boot disks for the operating system. MPIO for VFC devices in the AIX client partition doesn’t require any specific configuration and supports round robin, load balancing and failover modes. MPIO for vSCSI devices in the AIX client partition supports only failover mode: for any given vSCSI disk, a client partition uses a primary path to one VIO server and fails over to the secondary path through the other VIO server.
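
On the AIX client, the paths and the path-selection algorithm can be checked with standard MPIO commands. In this sketch, hdisk0 is only a placeholder for a virtual disk, and the round_robin setting applies only to NPIV disks using the AIX default PCM (vSCSI disks remain fail_over):

lspath -l hdisk0                                 # hdisk0 is an example; list the paths and their states
lsattr -El hdisk0 -a algorithm                   # show the current path-selection algorithm
chdev -l hdisk0 -a algorithm=round_robin -P      # NPIV/AIX PCM only; applied at next boot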

Virtual SCSI

vSCSI allows the VIO server to carve up storage that it owns and allocate it to client LPARs. The vSCSI protocol supports connections over Fibre Channel, parallel SCSI and SCSI RAID devices, and it also provides for optical devices such as DVD drives; there is a method to virtualize tape as well. It should be noted that vSCSI requires more overhead in the VIO server than NPIV does, because with vSCSI every I/O request has to pass through the I/O stack in the VIO server. vSCSI supports all of the mandatory commands in the SCSI protocol, but not all optional commands are supported, so it’s important to use the man pages or the commands reference to check command syntax when implementing vSCSI.

When vSCSI and virtual Ethernet are both in use on the same VIO server, be aware that virtual Ethernet, having non-persistent traffic, runs at a higher priority than vSCSI. To make sure networking traffic doesn’t starve vSCSI of CPU cycles, it’s important that threading is turned on for the shared Ethernet adapter (SEA). Threading provides the best balance for mixed throughput and has been the default since PowerVM v1.2.
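
A quick way to verify this on the VIO server is to check the thread attribute on the SEA; ent5 below is only a placeholder for your SEA device name:

lsdev -dev ent5 -attr thread     # ent5 is an example; 1 = threaded (the default), 0 = interrupt mode
chdev -dev ent5 -attr thread=1   # turn threading back on if it has been disabled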

With vSCSI, you have a couple of options for allocating disk to client LPARs. One option is to take a provided LUN (hdisk) and carve it up into LVs (logical volumes), which are then allocated to individual LPARs. While this works well, it isn’t permitted if you want to use LPM (Live Partition Mobility). The storage used can be internal or SAN-attached, but only SAN-provided disk is supported for LPM or SSPs. For the most part, the preferred option is to allocate a whole LUN (hdisk) from the SAN to the LPAR. With vSCSI, the MPIO drivers and the device drivers for the storage are installed in the VIO servers, which can result in some savings if the vendor charges for its MPIO drivers. In a dual-VIO environment, the same SAN disk would be allocated from each VIO server, and AIX-native MPIO would be used in the client to provide multipathing. Mappings can be checked as follows:

lsmap -all
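
For the whole-LUN approach, the mapping itself is a single mkvdev per disk on each VIO server. In this sketch, hdisk5 and vhost1 are placeholders, and the reserve_policy change (needed so both VIO servers can present the same LUN) assumes the disk driver supports that attribute:

chdev -dev hdisk5 -attr reserve_policy=no_reserve         # example disk; allow both VIO servers to open the LUN
mkvdev -vdev hdisk5 -vadapter vhost1 -dev lpar1_rootvg    # present the whole LUN to the client's vhost adapter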

vSCSI is required for certain functions such as vtOpt and SSPs. VFC doesn’t support virtualization capabilities that are based on the SSP, such as thin provisioning.
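
As an illustration of how vtOpt rides on top of vSCSI, a file-backed virtual optical device can be built on the VIO server roughly as follows; the repository size, ISO file name and vhost0 are only examples:

mkrep -sp rootvg -size 20G                                              # create the virtual media repository (example size)
mkvopt -name aix_install.iso -file /home/padmin/aix_install.iso -ro     # import an example ISO image
mkvdev -fbo -vadapter vhost0                                            # create the file-backed optical device (vtoptX)
loadopt -disk aix_install.iso -vtd vtopt0                               # load the image, as if inserting a DVD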

Virtual Fibre Channel and NPIV

As mentioned, NPIV allows multiple LPARs to share a single Fibre Channel adapter port, which allows for consolidation of Fibre adapters and frees up slots in the servers and I/O drawers. With vSCSI, the VIO server owns the adapters and sees all of the storage; storage then gets mapped at the VIO server to the various client LPARs, which means keeping spreadsheets to track allocations and leaves the potential for mapping errors. With NPIV, the VIO server still owns the physical Fibre adapters, but the virtual adapters are owned by the client LPAR. The client LPAR sees only its own storage, and the VIO server doesn’t see that storage, which makes zoning and allocating storage easier and cleaner. Another difference is that with NPIV the MPIO drivers are installed in the client LPAR, not the VIO server, so this can lead to increased costs if the vendor charges for those drivers.

For VFC (NPIV), the VIO server acts as a Fibre Channel pass-through, whereas vSCSI acts as a SCSI emulator. A VFC client adapter is a virtual device that provides a VIO client partition with a Fibre Channel connection to the SAN through the VIO server partition. Two unique virtual worldwide port names (WWPNs), starting with the letter c, are generated by the HMC (Hardware Management Console) or IVM (Integrated Virtualization Manager) for each VFC client adapter. Once the client LPAR is activated, those WWPNs log in to the SAN like any other WWPNs from a physical port, so disk or tape storage target devices can be assigned to them as if they were physical Fibre Channel ports.
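
On the AIX client, the virtual WWPN the adapter is currently logged in with can be seen with lscfg; fcs0 here is the VFC client adapter as the client sees it, and both WWPNs of the pair are visible in the partition profile on the HMC:

lscfg -vpl fcs0 | grep "Network Address"    # fcs0 is an example; shows the virtual WWPN in use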

NPIV implementation has some specific requirements. While vSCSI can work on any of the supported adapters, NPIV requires adapters that support NPIV; currently these include the dual-port and four-port 8 Gb Fibre Channel adapters, the 16 Gb Fibre Channel adapters and some of the newer 10 Gb FCoE adapters. NPIV also requires that the switch the adapter port connects to supports NPIV. The lsnports command can be used to check for this.
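
Run on the VIO server, lsnports lists only the NPIV-capable ports, and its fabric column indicates whether the attached switch port supports NPIV:

lsnports    # fabric=1 means the switch supports NPIV; fabric=0 means it does not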

When using VFCs, you need to define one VFC client adapter per Fibre adapter port that you want to connect to the LPAR. These are then mapped to the LPAR from the VIO server. Mappings can be checked with:

lsmap -all -npiv
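
The mapping itself is created on the VIO server by tying each VFC server (vfchost) adapter to an NPIV-capable physical port; vfchost0 and fcs0 below are placeholder names:

vfcmap -vadapter vfchost0 -fcp fcs0     # example devices; map the VFC server adapter to physical port fcs0
lsmap -vadapter vfchost0 -npiv          # verify just this one mapping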

There is a maximum of 64 active VFC client adapters per physical port and a maximum of 32,000 unique WWPN pairs per system. When using NPIV, it’s better to reuse partition profiles than to delete them, as deleting a profile makes its unique WWPNs unavailable for future use.

Jaqui Lynch is an independent consultant, focusing on enterprise architecture, performance and delivery on Power Systems with AIX and Linux.


