Category Archives: VMware vSphere 5.0

High CPU Contention reported by vCOPs

I finally had a chance to get vCenter Operations Manager set up.  It has been collecting data for about a week, and I have just been clicking around to see what I can find.  I noticed that it was reporting very high CPU contention across the entire vSphere infrastructure, so I started to investigate.

In vCOPs, I went to the Analysis tab, set the focus area to CPU, and clicked VM CPU Contention.

[Image: vCOPs CPU Contention focus area]

Once I clicked on this, it brought up a graph of sorts, displaying all red, which isn't good.

[Image: CPU contention graph]

The CPU contention percentage ranged from 38% up to 720% for every VM.  Here is a graph from one of the ESXi hosts.

[Image: CPU usage and ready time graph]

You can see it's averaging around 1000 ms of ready time.  Now, according to this article I found, 1000 ms is about 5%, but according to vCOPs this same host is coming in at 174%, so why the large difference?
http://www.vfrank.org/2011/01/31/cpu-ready-1000-ms-equals-5/
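The 5% figure comes from how the real-time performance chart samples: CPU ready is reported as a summation in milliseconds over a 20-second (20,000 ms) interval, so percent ready is ready_ms divided by 20,000, times 100. A quick sketch of the arithmetic:

```shell
# Convert a CPU ready summation (ms) to %RDY, assuming the vSphere
# real-time chart's 20-second (20,000 ms) sampling interval.
ready_ms=1000
interval_ms=20000
awk -v r="$ready_ms" -v i="$interval_ms" 'BEGIN { printf "%.1f%%\n", r / i * 100 }'
# prints 5.0%
```

So 1000 ms of ready time per 20-second sample works out to the 5% the article describes.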

So at this point I'm not sure whether there is an actual CPU contention issue or not.  To be sure, I connected to the ESXi host by SSHing into the vMA VM.  Once connected, I ran the following command to connect to the ESXi host and view CPU info in resxtop:

resxtop --server 192.168.1.1
(log in as root with the root password)

At this point, resxtop will show up.

[Image: resxtop output on the ESXi host]

I highlighted the important CPU fields in red; here are the descriptions from VMware.

Run (%RUN): This value represents the percentage of absolute time the virtual machine was running on the system.
Wait (%WAIT): This value represents the percentage of time the virtual machine was waiting for some VMkernel activity to complete (such as I/O) before it could continue.
Ready (%RDY): This value represents the percentage of time that the virtual machine was ready to execute commands, but had not yet been scheduled for CPU time due to contention with other virtual machines.
Co-stop (%CSTP): This value represents the percentage of time that the virtual machine was ready to execute commands but was waiting for multiple CPUs to become available at once, as the virtual machine is configured to use multiple vCPUs.
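One gotcha when reading %RDY in resxtop: in the group view it is summed across all of a VM's vCPUs, so a multi-vCPU VM's figure should be divided by its vCPU count before comparing against per-CPU guidance. A small sketch of that arithmetic (the sample values are hypothetical, not from this host):

```shell
# %RDY in resxtop's group view is the sum across a VM's vCPUs; divide
# by the vCPU count for a per-vCPU figure. Sample values are hypothetical.
rdy_total=20   # %RDY shown for the VM group
vcpus=4
awk -v r="$rdy_total" -v n="$vcpus" 'BEGIN { printf "%.1f%% per vCPU\n", r / n }'
# prints 5.0% per vCPU
```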

At this point there does not appear to be an issue with CPU contention, but I do need to find out why vCOPs is reporting it that way.

After I created a support ticket with VMware, it was determined that this area was mislabeled and would be fixed in a later release.  The metric is actually latency in milliseconds, not a percentage of CPU contention, and support adjusted the heatmap accordingly.

vMA and Patching a Single ESXi 5 Host

I am in the process of rolling out a 10-user pilot for our computer lab on an older DL360 G5 we have that isn't being used for anything at the moment.  Since I am keeping this host separate from our server infrastructure, I don't have the luxury of using Update Manager and moving vCenter and the other VMs off to install updates and reboot the host on demand.

I wasn't sure how to accomplish this, but found a link here that looked like what I was looking for.

In order to patch an ESXi 5 host without using Update Manager, I went to VMware's patch download website and downloaded the latest set of patches.

To apply these patches I also needed the vSphere Management Assistant (vMA), which VMware supplies as an OVF file.  I downloaded and deployed the OVF into VMware Workstation on my desktop.  Upon starting it for the first time, I received the following error.

I hadn't seen this error before and didn't know anything about IP Pools yet, so I looked up the error and found this link, which I used to resolve the issue and boot the vMA VM.  I then went through the process of setting up vMA, which is pretty self-explanatory.

- Logged into the vMA as vi-admin with the password I set.
- Uploaded the .zip update file I downloaded from the VMware website to the local datastore.
- Went through the rest of this document to apply the patches to the ESXi host:
http://communities.vmware.com/people/vmroyale/blog/2011/09/15/updating-esxi-5–single-use-esxcli-how-to
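Put together, the procedure in that document can be sketched with the vMA's esxcli and vicfg-hostops commands. The host address, datastore name, and patch-bundle filename below are placeholders for my environment, not values from the document; substitute your own.

```shell
# Sketch of patching a single ESXi 5 host from the vMA, without Update Manager.
# 192.168.1.1, datastore1, and the bundle filename are placeholders.

# Put the host into maintenance mode first (power off or migrate its VMs).
vicfg-hostops --server 192.168.1.1 --operation enter

# Apply the patch bundle that was uploaded to the host's local datastore.
esxcli --server 192.168.1.1 --username root \
    software vib update --depot /vmfs/volumes/datastore1/patch-bundle.zip

# Reboot, then take the host out of maintenance mode once it is back up.
vicfg-hostops --server 192.168.1.1 --operation reboot
vicfg-hostops --server 192.168.1.1 --operation exit
```

Each command prompts for the host credentials unless a vi-fastpass target has been configured on the vMA.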

Advantages of VAAI and Enabling VAAI on a LeftHand P4500 G2 SAN with vSphere 5

In preparation for taking the VCP5 exam, I have been doing a lot of reading on vSphere 5.  One of the topics was the vSphere Storage APIs for Array Integration (VAAI).  After some Google searching, it appears that LeftHand SANs do support VAAI.  That is great news, since it can dramatically increase performance.  What VAAI does is offload some storage tasks from the ESXi hosts onto the storage array itself.  Below are some more specific details of what it does and how it improves performance.

Array Integration allows for Hardware-Assisted Locking, which locks on a per-sector basis instead of locking the entire LUN.  This can substantially increase performance when a lot of metadata changes occur, such as when many VMs are powered on at once.

Hardware-Accelerated Full Copy allows the storage array itself to make entire copies on its own, without having to send any read/write requests through an ESXi host.  Operations such as cloning VMs or deploying new VMs from templates see a significant reduction in storage traffic between the ESXi host and the array.

Hardware-Accelerated Block Zeroing allows storage arrays to zero out blocks very quickly, speeding up the process of creating new VMs and formatting virtual disks.

vSphere 5 is also thin-provisioning aware, and when coupled with VAAI it allows you to reclaim dead space and gives you advance warning when approaching out-of-space conditions.

VAAI Whiteboard
VAAI Demo
VAAI Performance Info

By default, VAAI is enabled and supported with ESXi 5, so nothing had to be done in vSphere 5 or on the P4500 SAN.
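To confirm this rather than take the default on faith, esxcli on the host can report hardware-acceleration status per device and the state of the three acceleration settings. This is a sketch of the checks; run it on the ESXi 5 host directly (or from the vMA with --server):

```shell
# Show VAAI support per storage device (plugin status and primitives).
esxcli storage core device vaai status get

# The three acceleration primitives as host advanced settings (1 = enabled):
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list --option /VMFS3/HardwareAcceleratedLocking
```

The vSphere Client also shows this per datastore as the "Hardware Acceleration" column (Supported/Unsupported/Unknown).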