Table of Contents
Lab Overview - HOL-SDC-1604 - vSphere Performance Optimization
Lab Guidance
vSphere 6 Performance Introduction
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Introduction to CPU Performance Troubleshooting
CPU Contention
Conclusion and Clean-Up
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
Introduction to Latency Sensitivity
Performance impact of the Latency Sensitivity setting
Conclusion and Cleanup
Module 3: CPU Performance Feature: Power Management Policies (15 minutes)
Introduction to, and Performance Impact of, Power Policies
Configuring the Server BIOS Power Management Settings
Configuring Host Power Management in ESXi
Conclusion
Module 4: vSphere Fault Tolerance (FT) and Performance (30 minutes)
Introduction to vSphere Fault Tolerance
Configure Lab for Fault Tolerance
Fault Tolerance Performance
Conclusion
Module 5: Memory Performance, Basic Concepts and Troubleshooting (30 minutes)
Introduction to Memory Performance Troubleshooting
Memory Resource Control
Conclusion and Clean-Up
Module 6: Memory Performance Feature: vNUMA with Memory Hot Add (30 minutes)
Introduction to NUMA and vNUMA
vNUMA vs. Cores per Socket
vNUMA with Memory Hot Add
Conclusion and Clean-Up
Module 7: Storage Performance and Troubleshooting (30 minutes)
Introduction to Storage Performance Troubleshooting
Storage I/O Contention
Storage Cluster and Storage DRS
Conclusion and Clean-Up
Module 8: Network Performance, Basic Concepts and Troubleshooting (15 minutes)
Introduction to Network Performance
Show network contention
Conclusion and Cleanup
Module 9: Network Performance Feature: Network IO Control with Reservations (45 minutes)
Introduction to Network IO Control
Lab Guidance
You have 90 minutes for each lab session, and the estimated time to complete each
module is shown next to it. Each module can be completed on its own, and the
modules can be taken in any order, but make sure that you follow the instructions
carefully with respect to the cleanup procedure after each module. In short, all VMs
should be shut down after the completion of each module using the script indicated in
the module. In total, there are more than six hours of content in this lab.
Lab Module List:
Lab Overview
Lab Guidance
vSphere 6 Performance Introduction
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
Module 3: CPU Performance Feature: Power Policies (15 minutes)
Module 4: CPU Performance Feature: SMP-FT (30 minutes)
Module 5: Memory Performance, Basic Concepts and Troubleshooting (30 minutes)
Module 6: Memory Performance Feature: vNUMA with Memory Hot Add (30 minutes)
Module 7: Storage Performance and Troubleshooting (30 minutes)
Module 8: Network Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 9: Network Performance Feature: Network IO Control with Reservations (45
minutes)
Module 10: Performance Tool: esxtop CLI introduction (30 minutes)
Module 11: Performance Tool: esxtop for vSphere Web Client (30 minutes)
Module 12: Performance Tool: vRealize Operations, next step in performance
monitoring and Troubleshooting (30 minutes)
Lab Captains: David Morse (Modules 1, 2, 3, 4), Henrik Moenster (Modules 5, 6, 7, 12),
Robert Jensen (Modules 8, 9, 10, 11)
This lab manual can be downloaded from the Hands-on Labs Document site found here:
http://docs.hol.pub/catalog/
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon at the top of the
vSphere Web Client.
CPU Contention
Below is a list of the most common CPU performance issues:
High Ready Time: A CPU is in the Ready state when the virtual machine is ready to run
but unable to run because the vSphere scheduler is unable to find physical host CPU
resources to run the virtual machine on. Ready Time above 10% could indicate CPU
contention and might impact the performance of CPU-intensive applications. However,
some less CPU-sensitive applications and virtual machines can have much higher
ready time values and still perform satisfactorily.
High Costop time: Costop time indicates that the virtual machine has more vCPUs than
it needs, and that the excess vCPUs create overhead that drags down the performance
of the VM. The VM will likely run better with fewer vCPUs. A vCPU with high costop time
is kept from running while the other, more idle vCPUs catch up to the busy one.
CPU Limits: CPU Limits directly prevent a virtual machine from using more than a set
amount of CPU resources. Any CPU limit might cause a CPU performance problem if the
virtual machine needs resources beyond the limit.
Host CPU Saturation: When the Physical CPUs of a vSphere host are being
consistently utilized at 85% or more then the vSphere host may be saturated. When a
vSphere host is saturated, it is more difficult for the scheduler to find free physical CPU
resources in order to run virtual machines.
Guest CPU Saturation: Guest CPU (vCPU) saturation occurs when the application inside
the virtual machine is using 90% or more of the CPU resources assigned to the virtual
machine. This may be an indicator that the application is being bottlenecked on vCPU
resource. In these situations, adding additional vCPU resources to the virtual machine
might improve performance.
Incorrect SMP Usage: Using large SMP virtual machines can cause extra overhead.
Virtual machines should be correctly sized for the application that is intended to run in
the virtual machine. Some applications may only support multithreading up to a certain
number of threads. Assigning additional vCPUs to the virtual machine may cause
additional overhead. If vCPU usage shows that a virtual machine configured with
multiple vCPUs is only using one of them, it might be an indicator that the
application inside the virtual machine is unable to take advantage of the additional
vCPU capacity, or that the guest OS is incorrectly configured.
Low Guest Usage: Low in-guest CPU utilization might be an indicator that the
application is not configured correctly, or that the application is starved of some other
resource, such as I/O or memory, and therefore cannot fully utilize the assigned vCPU
resources.
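As a point of reference, vCenter's real-time performance charts expose ready time as a "summation" counter in milliseconds per 20-second sample, so the 10% Ready Time threshold mentioned above is usually checked by converting that raw value to a percentage. A quick sketch of the conversion (the sample values are illustrative; for multi-vCPU VMs the counter aggregates across vCPUs, so interpret it per vCPU):

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """Convert a CPU Ready 'summation' value (milliseconds of ready time
    accumulated over one sampling interval) into a percentage of that
    interval. vCenter real-time charts use a 20-second interval."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# 2000 ms of ready time in a 20 s sample is 10% ready -- right at the
# threshold where contention may start to hurt CPU-intensive workloads.
print(cpu_ready_percent(2000))   # 10.0
```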
IMPORTANT NOTE: Due to changing loads in the lab environment, your values may
vary from the values shown in the screenshots.
Select the perf-worker-01a virtual machine from the list of VMs on the left
Click the Monitor tab
Click Performance
Click Advanced
Click on Chart Options
complete or waiting for ESX level swapping to complete. These non-idle vSphere
system waits are called VMWAIT.
Ready (RDY): A CPU is in the Ready state when the virtual machine is ready to
run but unable to run because the vSphere scheduler is unable to find physical
host CPU resources to run the virtual machine on. One potential reason for
elevated Ready time is that the VM is constrained by a user-set CPU limit or
resource pool limit, reported as max limited (MLMTD).
CoStop (CSTP): Time the vCPUs of a multi-vCPU virtual machine spent waiting
to be co-started. This gives an indication of the co-scheduling overhead incurred
by the virtual machine.
Run: Time the virtual machine was running on a physical processor.
Select esx-01a.corp.local
Select the Monitor tab
Select Performance
Select the Advanced view
Select the CPU view
Select perf-worker-01b
Select Monitor
Select Performance
Select the CPU view
Notice that the virtual machine is now using both vCPUs. This is because the OS in the
virtual machine supports CPU hot-add, and because that feature has been enabled on
the virtual machine.
Investigate performance
Notice that the performance of perf-worker-01b has increased, since we added the
additional virtual CPU.
However, this is not always the case. If the host these VMs are running on (esx-01a)
only had two physical CPUs, the addition of an additional vCPU would have caused an
overcommitment, leading to high %READY and poor performance.
Remember, most workloads are not necessarily CPU bound. The OS and the application
need to be multi-threaded to get performance improvements from additional
CPUs. Most of the work that an OS does is typically not CPU-bound; that is, most of
its time is spent waiting for external events such as user interaction, device input, or
data retrieval, rather than executing instructions. Because otherwise-unused CPU cycles
are available to absorb the virtualization overhead, these workloads will typically have
throughput similar to native, but potentially with a slight increase in latency.
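The "one busy vCPU, many idle vCPUs" pattern described above can be spotted with a simple heuristic over per-vCPU usage percentages. This is an illustrative sketch only; the 90%/10% thresholds are our own, not VMware guidance:

```python
def looks_single_threaded(per_vcpu_usage, busy=90.0, idle=10.0):
    """Heuristic: one vCPU pegged while all the others sit idle suggests
    the guest workload cannot make use of its additional vCPUs."""
    if len(per_vcpu_usage) < 2:
        return False
    ordered = sorted(per_vcpu_usage, reverse=True)
    return ordered[0] >= busy and all(u <= idle for u in ordered[1:])

print(looks_single_threaded([97.0, 3.0, 1.0]))   # True
print(looks_single_threaded([60.0, 55.0]))       # False
```

A VM flagged by a check like this is a candidate for right-sizing down to fewer vCPUs.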
Configuring a virtual machine with more virtual CPUs (vCPUs) than its workload can use
might cause slightly increased resource usage, potentially impacting performance on
very heavily loaded systems. Common examples of this include a single-threaded
Launch PowerCLI
If the PowerCLI window is not already open, click on the VMware vSphere PowerCLI
icon in the taskbar to open a command prompt.
Conclusion
This concludes Module 1: CPU Performance, Basic Concepts and
Troubleshooting. We hope you have enjoyed taking it. Please do not forget to fill out
the survey when you are finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
If the latency sensitivity feature is not relevant to your environment, feel free to choose
a different module.
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon at the top of the
vSphere Web Client.
collecting the benchmark results from those CPU intensive workloads. These VMs,
perf-worker-05a and perf-worker-06a, will create high demand for CPU on the host,
which will help us demonstrate the Latency Sensitivity feature.
IMPORTANT NOTE: Due to changing loads in the lab environment, your values may
vary from the values shown in the screenshots.
Edit perf-worker-04a
We will use the perf-worker-04a virtual machine to demonstrate the Latency Sensitivity
feature. To show how the 'High' Latency Sensitivity setting affects network latency, we
will compare network performance between perf-worker-04a with Latency Sensitivity set
to 'Normal' and that same VM with Latency Sensitivity set to 'High'.
The Latency Sensitivity feature, when set to 'High', has two VM resource requirements.
For best performance, it needs 100% memory reservation and 100% CPU reservation.
To make a fair comparison, both the 'Normal' latency sensitivity VM and the 'High'
latency sensitivity VM should have the same resource reservations, so that the only
difference between the two is the 'High' latency sensitivity setting.
First, we will create resource allocations for the perf-worker-04a virtual machine while
Latency Sensitivity is set to "Normal".
1. Select perf-worker-04a
2. Select Edit Settings
Power on perf-worker-04a
1. Select "perf-worker-04a"
2. Select "Power"
Select esx-02a.corp.local
Select Monitor
Select Performance
Select Advanced
The Latest value for esx-02a.corp.local's Usage should be close to
100%. This indicates that the perf-worker-05a and perf-worker-06a VMs are
consuming as much CPU on the host as they can.
The Resource Allocation for the 'Normal' latency sensitivity VM shows that only a small
portion of the total CPU and memory reservation is active. You may see different
values if the VM is still booting up.
SSH to perf-worker-04a
1. Select perf-worker-04a
2. Click Open
test because the request is processed in the kernel and does not need to access the
application layer of the operating system.
We have finished testing network latency and throughput on the 'normal' Latency
Sensitivity VM. Do not close this PuTTY window as we will use it for reference later. We
will now change the VM to 'high' Latency Sensitivity.
The Latency Sensitivity feature, when set to 'High', has two VM resource requirements.
For best performance, it needs 100% memory reservation and 100% CPU reservation.
To make a fair comparison, both the 'Normal' latency sensitivity VM and the 'High'
latency sensitivity VM should have the same resource reservations, so that the only
difference between the two is the 'High' latency sensitivity setting.
Now we will change the Latency Sensitivity setting of the perf-worker-04a virtual
machine from "Normal" (the default setting) to "High".
1. Click perf-worker-04a
2. Click Actions
3. Click Edit Settings...
Select VM Options
Expand Advanced
Select High
Click OK
will always appear in the Advanced Settings screen, even when the CPU reservation has
already been set high enough.
If no reservation is set, the VM is still allowed to power on, and no further warnings are
given.
Power on perf-worker-04a
1. Right-click perf-worker-04a
2. Select Power
3. Click Power On
SSH to perf-worker-04a
1. Select perf-worker-04a
2. Click Open
Important Note: Due to variable loads in the lab environment, your numbers may
differ from those above.
The ping test we completed sends as many ping requests to the remote VM as possible
("Back to back pings") within a one second period. As soon as one ping is returned,
another request is sent. The ping command outputs four statistics per test:
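On Linux, the summary line that ping prints typically reports the min/avg/max/mdev round-trip times. A small sketch of how those statistics are computed from individual RTT samples (mdev here follows the iputils definition, the root-mean-square deviation from the mean; the sample values are illustrative):

```python
import statistics

def ping_summary(rtts_ms):
    """Summarize round-trip times the way Linux ping's closing line does:
    min/avg/max plus mdev (RMS deviation from the mean)."""
    avg = statistics.fmean(rtts_ms)
    mdev = (statistics.fmean(r * r for r in rtts_ms) - avg * avg) ** 0.5
    return {"min": min(rtts_ms), "avg": avg, "max": max(rtts_ms), "mdev": mdev}

print(ping_summary([0.4, 0.5, 0.6, 0.5]))
```

A low mdev relative to avg means the latency is consistent, which is exactly what the 'High' Latency Sensitivity setting aims to deliver: lower jitter, not just a lower average.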
We have finished the network tests. Close the windows using the X on each window.
Launch PowerCLI
If the PowerCLI window is not already open, click on the VMware vSphere PowerCLI
icon in the taskbar to open a command prompt.
To review:
1. On a powered off VM, set 100% memory reservation for the latency sensitive VM.
2. If your environment allows, set 100% CPU reservation for the latency sensitive VM
such that the MHz reserved is equal to 100% of the sum of the frequency of the VM's
vCPUs.
3. In Advanced Settings, set Latency Sensitivity to High.
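Step 2's reservation arithmetic can be sketched directly (the vCPU count and per-core clock speed below are illustrative):

```python
def full_cpu_reservation_mhz(num_vcpus: int, core_mhz: int) -> int:
    """A 100% CPU reservation equals the sum of the clock frequencies
    of all the VM's vCPUs."""
    return num_vcpus * core_mhz

# e.g. a 2-vCPU VM on 2.0 GHz cores needs a 4000 MHz reservation
print(full_cpu_reservation_mhz(2, 2000))   # 4000
```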
If you want to learn more about running latency sensitive applications on vSphere,
consult these white papers:
http://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf
http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf
Conclusion
This concludes Module 2: CPU Performance Feature: Latency Sensitivity Setting. We hope you have enjoyed taking it. Please
do not forget to fill out the survey when you are finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 3: CPU Performance Feature: Power Management Policies (15 minutes)
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon at the top of the
vSphere Web Client.
NOTE: Disabling power management usually results in more power being consumed by
the system, especially when it is lightly loaded. The majority of applications benefit from
the power savings offered by power management, with little or no performance impact.
Therefore, if disabling power management does not yield increased performance,
VMware recommends that power management be re-enabled to reduce power
consumption.
C-states deeper than C1/C1E (typically C3 and/or C6 on Intel and AMD) are
managed by software and enable further power savings. To get the best performance
per watt, enable all C-states, including the deepest one, in the BIOS.
This gives you the flexibility to use vSphere host power management to control
their use.
When Turbo Boost or Turbo Core is enabled, C1E and deep halt states (for
example, C3 and C6 on Intel) can sometimes even increase the performance of
certain lightly-threaded workloads (workloads that leave some hardware threads
idle). Therefore, you should enable C1E and deep C-states in the BIOS. However,
for a very few multithreaded workloads that are highly sensitive to I/O latency,
C-states can reduce performance. In these cases, you might obtain better
performance by disabling them in the BIOS. Also, C1E and deep C-state
implementations differ between processor vendors and generations,
so your results may vary.
Some systems have Processor Clocking Control (PCC) technology, which enables
ESXi to manage power on the host system indirectly, in cooperation with the
BIOS. This setting is usually located under the Advanced Power Management
options in the BIOS of supported HP systems and is usually called Collaborative
Power Control. As shown above, there is a Collaborative CPU Performance
Control setting in the Dell PowerEdge BIOS. With this technology, ESXi does not
manage P-states directly; it instead cooperates with the BIOS to determine the
processor clock rate. This feature was enabled by default only in ESXi 5.0 GA
through 5.0 U2, and has since been disabled in ESXi for stability reasons. You
should not re-enable it.
Select "esx-01a.corp.local"
Select "Manage"
Select "Settings"
Select "Power Management" in the Hardware section (not under System)
Here you can see which ACPI states are presented to the host and which Power
Management policy is currently active. There are four Power Management policies
available in ESXi 5.0, 5.1, 5.5, 6.0 and ESXi/ESX 4.1:
High Performance
Balanced (Default)
Low Power
Custom
High Performance
The High Performance power policy maximizes performance, and uses no power
management features. It keeps CPUs in the highest P-state at all times. It uses only
the top two C-states (running and halted), not any of the deep states (for example, C3
and C6 on the latest Intel processors). High Performance was the default power policy
for ESX/ESXi releases prior to 5.0.
Balanced (default)
The Balanced power policy is designed to reduce host power consumption while
having little or no impact on performance. The Balanced policy uses an algorithm that
exploits the processor's P-states. This has been the default power policy since ESXi
5.0. Beginning with ESXi 5.5, ESXi also uses deep C-states (deeper than C1) in the
Balanced power policy. Formerly, when a CPU was idle, it would always enter C1. Now
ESXi chooses a suitable deep C-state depending on its estimate of when the CPU will
next need to wake up.
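The C-state choice described above can be sketched conceptually: pick the deepest state whose wake-up (exit) latency comfortably fits inside the predicted idle period. This is an illustration of the idea only; the state table, latencies, and safety margin below are made up, not ESXi's actual values or algorithm:

```python
# (state name, exit latency in microseconds) -- deeper states save more
# power but take longer to wake from. Values are illustrative only.
C_STATES = [("C1", 2), ("C3", 50), ("C6", 200)]

def choose_c_state(predicted_idle_us: float, margin: float = 2.0) -> str:
    """Pick the deepest C-state whose exit latency, with a safety
    margin, still fits inside the predicted idle period."""
    best = "C1"
    for name, exit_latency in C_STATES:
        if exit_latency * margin <= predicted_idle_us:
            best = name
    return best

print(choose_c_state(30))      # short idle -> C1
print(choose_c_state(5000))    # long idle  -> C6
```

The trade-off is visible in the sketch: waking a CPU from a deep state costs latency, so deep states only pay off when the CPU is expected to stay idle long enough.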
Low Power
The Low Power policy is designed to save substantially more power than the
Balanced policy by making the P-state and C-state selection algorithms more
aggressive, at the risk of reduced performance.
Custom
The Custom power policy starts out the same as Balanced, but allows individual
parameters to be modified.
Click "Cancel" to exit.
The next step describes settings that control the Custom power policy.
Power.MaxFreqPct : Do not use any P-states faster than the given percentage
of full CPU speed, rounded up to the next available P-state.
Power.MinFreqPct : Do not use any P-states slower than the given percentage
of full CPU speed.
Power.PerfBias : Performance Energy Bias Hint (Intel only). Sets an MSR on Intel
processors to an Intel-recommended value. Intel recommends 0 for high
performance, 6 for balanced, and 15 for low power. Other values are undefined.
Power.TimerHz : Controls how many times per second ESXi reevaluates which P-state each CPU should be in.
Power.UseCStates : Use deep ACPI C-states (C2 or below) when the processor is
idle.
Power.UsePStates : Use ACPI P-states to save power when the processor is
busy.
Conclusion
Key takeaways
We hope that you now know how to change power policies, both at the server BIOS level
and also within ESXi itself.
To summarize, here are some best practices around power management policies:
Configure your physical host (server BIOS) to use OS Control mode as the power
policy. If applicable, enable Turbo mode and C-states (including deep C-states);
these are usually the defaults.
Within ESXi, the default Balanced power management policy will achieve the
best performance per watt for most workloads.
For applications that require maximum performance, switch the BIOS power
policy and/or the ESXi power management policy to Maximum Performance
and High Performance respectively. This includes latency-sensitive applications
that must execute within strict constraints on response time. Be aware, however,
that this typically only results in minimal performance gain, but disables all
potential power savings.
Depending on your applications and the level of utilization of your ESXi hosts, the
correct power policy setting can have a great impact on both performance and energy
consumption. On modern hardware, it is possible to have ESXi control the power
management features of the hardware platform. You can use one of the predefined
policies or create your own custom policy.
Recent studies have shown that it is best to let ESXi control the power policy. For more
details, see the following references:
http://blogs.vmware.com/performance/2014/09/custom-power-management-settings-power-savings-vsphere-5-5.html
http://www.vmware.com/files/pdf/techpaper/hpm-perf-vsphere55.pdf
Conclusion
This concludes Module 3: CPU Performance Feature: Power Policies. We hope you have
enjoyed taking it. Please do not forget to fill out the survey when you are finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
FT Architecture
vSphere FT is made possible by four underlying technologies: storage, runtime state,
network, transparent failover.
Storage
vSphere FT ensures the storage of the primary and secondary virtual machines is
always kept in sync. Whenever vSphere FT protection begins, an initial synchronization
of the VMDKs happens using a Storage vMotion to ensure the primary and secondary
have the exact same disk state.
This initial Storage vMotion happens whenever FT is turned on, a failover occurs, or a
powered-off FT virtual machine powers on. The FT virtual machine is not considered FT-protected until the Storage vMotion completes.
After this initial synchronization, vSphere FT will mirror VMDK modifications between the
primary and secondary over the FT network to ensure the storage of the replicas
continues to be identical.
Runtime State
vSphere FT ensures the runtime state of the two replicas is always identical. It does this
by continuously capturing the active memory and precise execution state of the virtual
machine, and rapidly transferring them over a high speed network, allowing the virtual
machine to instantaneously switch from running on the primary ESXi host to the
secondary ESXi host whenever a failure occurs.
Network
The networks being used by the virtual machine are also virtualized by the underlying
ESXi host, ensuring that even after a failover, the virtual machine identity and network
connections are preserved. Similar to vSphere vMotion, vSphere FT manages the virtual
MAC address as part of the process. If the secondary virtual machine is activated,
vSphere FT pings the network switch to ensure that it is aware of the new physical
location of the virtual MAC address. Since vSphere FT preserves the storage, the precise
execution state, the network identity, and the active network connections, the result is
zero downtime and no disruption to users should an ESXi server failure occur.
Transparent Failover
If a failover occurs, vSphere FT ensures that the new primary always agrees with the old
primary about the state of the virtual machine. This is achieved by holding externally
visible output from the virtual machine and releasing it only once the secondary
acknowledges that the state of the two virtual machines is consistent (for the purposes
of vSphere FT, externally visible output means network transmissions).
Benefits of FT
vSphere FT offers the following benefits:
Provides continuous availability, for zero downtime and zero data loss with
infrastructure failures
Protects mission-critical, high-performance applications regardless of operating
system (OS)
Provides uninterrupted service through an intuitive administrative interface
Note that there are some differences between the vSphere editions: Standard and
Enterprise support 2 vCPU FT, while Enterprise Plus raises this to 4 vCPU support.
(credit: http://vinfrastructure.it/2015/02/vmware-vsphere-6-the-new-ft-feature/)
Because vSphere FT is suitable for workloads with a maximum of four vCPUs and
64GB of memory, it can be used in tiny and small vCenter Server deployments.
Prerequisites for FT
All hosts with vSphere FT enabled require a dedicated 10Gbps low-latency
VMkernel interface for vSphere FT logging traffic.
The option to turn on vSphere FT is unavailable (dimmed) if any of these conditions
apply:
The VM resides on a host that does not have a license for the feature.
The VM resides on a host that is in maintenance mode or standby mode.
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon in the top of the
vSphere Web Client.
1. Click Cluster Site A from the Hosts and Clusters list on the left
2. Click Manage
3. Click Settings
4. Click vSphere HA (NOTE: you should see the message "vSphere HA is Turned OFF" as shown above)
5. Click Edit...
We will now enable vSphere HA for the cluster.
Enable vSphere HA
1. Check Turn on vSphere HA
Power on perf-worker-02a
1. Left-click perf-worker-02a from the list of virtual machines on the left. Note the
slightly darker blue color for the VM, which indicates it is now fault-tolerant.
2. Click the Actions link in the upper right pane
3. Hover over the Power menu, and choose Power On
This will power on the perf-worker-02a VM, and start the procedure to make it Fault
Tolerant.
This will pop up a dialog box asking you to confirm; click Yes.
This will unregister, power off, and delete the secondary VM.
1. Click Cluster Site A from the Hosts and Clusters list on the left
2. Click Manage
3. Click Settings
4. Click vSphere HA (NOTE: you should see the message "vSphere HA is Turned ON" as shown above)
5. Click Edit...
We will now disable vSphere HA for the cluster.
Disable vSphere HA
1. Uncheck Turn on vSphere HA
2. Click OK
Kernel Compile
This experiment shows the time taken to do a parallel compile of the Linux kernel. This
is both a CPU- and MMU-intensive workload due to the forking of many parallel
processes, and the CPU is 100 percent utilized. This workload does some disk reads and
writes, but generates no network traffic.
As shown in the figure above, FT protection increases the kernel compile time by a small
amount -- about 7 seconds.
Iometer
Iometer is an I/O subsystem measurement and characterization tool for Microsoft
Windows. It is designed to produce a mix of operations to stress the disk. This
benchmark ran random I/Os of various types. The bar charts above show that the
FT-protected VM achieves nearly the same throughput as the non-protected VM.
Conclusion
All Fault Tolerance solutions rely on redundancy. That means a certain cost must be paid
to establish replicas and keep them in sync. These costs come in the form of CPU,
storage, and network overheads. For a variety of workloads, CPU and storage
overheads are generally modest or minimal with FT protection. The most
noticeable overhead for FT-protected virtual machines is the increase in latency for
network packets. However, the experiments performed have shown that FT-protected
workloads can achieve good application throughput despite an increase in network
latency; network latency does not dictate overall application throughput for a wide
variety of applications. On the other hand, applications that are sensitive to network
latency (such as high-frequency trading or real-time workloads) will pay a higher cost
under FT protection.
VMware vSphere Fault Tolerance is a revolutionary new technology. It universally
applies the basic principles and guarantees of fault-tolerant technology to any
multi-vCPU workload in a uniquely simple-to-use way. The vSphere FT solution is able to
achieve good throughput for a wide variety of applications.
Module 5: Memory
Performance, Basic
Concepts and
Troubleshooting (30
minutes)
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon in the top of the
vSphere Web Client.
Select perf-worker-02a
Return to the vSphere Web Client.
1. Select perf-worker-02a
This approach ensures that a virtual machine from which idle memory is reclaimed can
ramp up quickly to its full share-based allocation when it starts using its memory more
actively.
By default, active memory is estimated once every 60 seconds. To modify this, adjust
the Mem.SamplePeriod advanced setting.
1. Select "Monitor"
2. Select "Performance"
3. Select "Advanced"
4. Select the "Memory" view
Consumed memory on the host is around 4GB, but active memory is less than 3GB.
Notice that there is no memory contention, as the host has 8GB of memory.
In late 2014, VMware announced that future ESXi releases will no longer have TPS
(Transparent Page Sharing) enabled by default, although TPS is still available. For more
information see KB: http://kb.vmware.com/kb/2080735
Transparent page sharing is a method by which redundant copies of memory pages are
eliminated (deduplicated). TPS had always been enabled by default until late 2014.
However, if TPS is enabled and you are running on modern hardware-assisted memory
virtualization systems, vSphere will preferentially back guest physical pages with large
host physical pages (a 2MB contiguous memory region instead of 4KB for regular pages)
for better performance. vSphere will not attempt to share large physical pages, because
the probability of finding two large pages that are identical is very low. If memory
pressure occurs on the host, vSphere may break the large memory pages into regular
4KB pages, which TPS will then be able to use to consolidate memory on the host.
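The idea behind TPS can be sketched in a few lines of Python. This is a toy model, not the ESXi implementation; real TPS hashes 4KB machine pages and verifies candidate matches before sharing them:

```python
import hashlib

def shared_backing(guest_pages):
    """Back identical guest pages with a single machine page, keyed by content hash."""
    machine_pages = {}
    for content in guest_pages:
        digest = hashlib.sha256(content).hexdigest()
        # Only the first page with a given content is stored; later
        # identical pages are mapped to the same machine page.
        machine_pages.setdefault(digest, content)
    return machine_pages

# Four guest pages with only two distinct contents need just two machine pages.
pages = [b"\x00" * 4096, b"\x00" * 4096, b"code" * 1024, b"\x00" * 4096]
print(len(shared_backing(pages)))  # 2
```

Zero-filled pages, as in this sketch, are the classic best case for sharing; large (2MB) pages are skipped for the reason given above.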
In vSphere 6, TPS has been enhanced to support different levels of page sharing such as
intra VM sharing, inter VM sharing etc. See this article for more information:
http://kb.vmware.com/kb/2097593
Now that memory pressure is occurring in the system, vSphere will begin to use
memory overcommit techniques to conserve memory use.
It may take a while for vCenter to update the memory utilization statistics, so you might
have to wait. (Try refreshing if nothing happens.)
Notice that vSphere has used some memory overcommit techniques on the perf-worker
virtual machines to relieve the memory pressure, and that consumed memory for the
virtual machines is now lower than before we applied memory pressure. As long as the
active memory a virtual machine requires can stay in physical memory, the application
will perform well.
Select esx-01a.corp.local
1. Select esx-01a.corp.local
1. Select Monitor
2. Select Performance
3. Select Advanced
4. Select Memory
Notice that Granted and Consumed are very close to the full size of the ESXi host
(8GB); Active is higher than before, but still less than Consumed. You can also see how
swapping and ballooning started when we increased the memory pressure on the host.
Also notice that Swap used is relatively low. Any active swapping is a performance
concern, but relying on this metric alone can be misleading. To tell more accurately
whether swapping is affecting performance, you need to look at the Swap in rate,
available from the Chart Options screen, which we will look at next. Any non-trivial
Swap in rate would likely indicate a performance problem.
Continue on to see how that has impacted the measured memory performance.
Edit perf-worker-03a
1. Select perf-worker-03a
2. Select Summary
3. Click Edit Settings...
Now that we have doubled the amount of memory shares assigned to perf-worker-03a,
the virtual machine is being prioritized over perf-worker-02a and perf-worker-04a. This
results in increased memory performance of perf-worker-03a.
Shares are a way of influencing how access to a resource is prioritized between virtual
machines, but only under resource contention. They will not increase VM performance
in an underutilized environment.
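The proportional effect of shares under contention can be sketched in Python (the share values and the 4096MB of contended memory are made-up numbers for illustration, not lab values):

```python
def entitlements(shares, contended_mb):
    """Divide a contended amount of memory between VMs in proportion to their shares."""
    total_shares = sum(shares.values())
    return {vm: contended_mb * s / total_shares for vm, s in shares.items()}

# perf-worker-03a holds twice the shares of each other worker,
# so it is entitled to twice their slice of the contended memory.
vms = {"perf-worker-02a": 1000, "perf-worker-03a": 2000, "perf-worker-04a": 1000}
print(entitlements(vms, 4096))
```

Without contention the shares never come into play; every VM simply gets the memory it asks for.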
Let's try and remove the memory contention, but keep the high amount of shares
assigned to perf-worker-03a.
Launch PowerCLI
If the PowerCLI window is not already open, click on the VMware vSphere PowerCLI
icon in the taskbar to open a command prompt.
Conclusion
This concludes Module 5, Memory Performance, Basic Concepts and
Troubleshooting. We hope you have enjoyed taking it. Please do not forget to fill out
the survey when you are finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
Module 3: CPU Performance Feature: Power Management Policies (15 minutes)
Module 4: vSphere Fault Tolerance (FT) and Performance (30 minutes)
Module 5: Memory Performance, Basic Concepts and Troubleshooting (30 minutes)
Module 6: Memory Performance Feature: vNUMA with Memory Hot Add (30 minutes)
Module 7: Storage Performance and Troubleshooting (30 minutes)
Module 8: Network Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 6: Memory
Performance Feature:
vNUMA with Memory Hot
Add (30 minutes)
NUMA
Non-Uniform Memory Access (NUMA) system architecture.
Each node consists of CPU cores and memory. A pCPU can access memory across NUMA
nodes, but at a performance cost: memory access time can be 30% to 100% longer.
Without vNUMA
In this example, a VM with 12 vCPUs is running on a host with four NUMA nodes of 6
cores each. This VM is not being presented with the physical NUMA configuration, so
the guest OS and application see only a single NUMA node. This means that the
guest has no chance of placing processes and memory within a physical NUMA node.
With vNUMA
In this example, a VM with 12 vCPUs is running on a host that has four NUMA nodes of
6 cores each. This VM is being presented with the physical NUMA configuration, so
the guest OS and application see two NUMA nodes. This means that the guest
can place processes and accompanying memory within a physical NUMA node when
possible.
We have good memory locality.
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon in the top of the
vSphere Web Client.
Power on perf-worker-01a VM
1. Right click perf-worker-01a
2. Select Power
3. Click Power On
This is valid when vNUMA is not enabled. Let's see what happens when we enable
vNUMA with this 2 cores per socket configuration.
Power on perf-worker-01a VM
1. Right click perf-worker-01a
2. Select Power
3. Click Power On
We saw earlier in this module that changing the cores per socket alone did not alter the
NUMA architecture presented to the VM. Now we can see that when used in combination
with vNUMA, the cores per socket configuration dictates the presented vNUMA
architecture. This means that when using the cores per socket feature on VMs with more
than 8 vCPUs (default value), the configuration dictates the vNUMA architecture
presented to the VM and can therefore have an impact on VM performance, because
we can force a VM to unnecessarily span multiple NUMA nodes.
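A simplified sketch of that relationship in Python, assuming (as in this vSphere version) that each virtual socket becomes one vNUMA node once vNUMA is active:

```python
def vnuma_layout(vcpus, cores_per_socket):
    """Return the vNUMA node count presented to the guest (simplified model)."""
    return vcpus // cores_per_socket

# 12 vCPUs at 6 cores per socket -> 2 vNUMA nodes, matching 6-core physical nodes.
print(vnuma_layout(12, 6))  # 2
# 12 vCPUs at 2 cores per socket -> 6 vNUMA nodes, spanning far more physical
# nodes than necessary on a host with four 6-core nodes.
print(vnuma_layout(12, 2))  # 6
```

This is why the default "1 core per socket" setting is usually safest: it lets the NUMA scheduler pick a layout that matches the physical hardware.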
Launch NumaExplorer
perf-worker-01a should already be running and you should have an active RDP session
to it. If not, power on perf-worker-01a and launch the RDP session from the shortcut on
the desktop, as previously in this module.
On perf-worker-01a, do the following.
1. Click Start
2. Click NumaExplorer
This experiment shows us that on a vNUMA enabled VM, memory hot add does in fact
distribute the additional memory evenly across vNUMA nodes.
Launch PowerCLI
If the PowerCLI window is not already open, click on the "VMware vSphere PowerCLI"
icon in the taskbar to open a command prompt.
socket value to ensure that the vNUMA layout of a VM fits within the smallest
physical NUMA node size.
Remember that a NUMA node consists of CPU cores and memory. So if a VM has more
memory than will fit within a single NUMA node, and the VM has 8 or fewer vCPUs, it
may make sense to enable vNUMA so that the guest OS can better place vCPUs and
memory.
There has been some confusion around the performance impact of setting the cores per
socket of a VM and how vNUMA actually works. By completing this module, we have
shown that:
1. Setting the cores per socket on a VM without vNUMA has no performance impact,
and should only be used to comply with license restrictions.
2. Setting the cores per socket of a VM with vNUMA enabled can have a
performance impact and can be used to force a particular vNUMA architecture.
Use with caution!
3. vNUMA is an important feature to ensure optimal performance of larger VMs (>8
vCPUs by default)
If you want to know more about the vNUMA feature of vSphere, see these articles:
http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-CPU-Sched-Perf.pdf (as of
June 2015, the paper has not yet been updated with the vSphere 6 additions)
http://blogs.vmware.com/vsphere/tag/vnuma
Conclusion
This concludes Module 6, Memory Performance Feature: vNUMA with Memory
Hot Add. We hope you have enjoyed taking it. Please do not forget to fill out the survey
when you are finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
Module 3: CPU Performance Feature: Power Management Policies (15 minutes)
Module 4: vSphere Fault Tolerance (FT) and Performance (30 minutes)
Module 5: Memory Performance, Basic Concepts and Troubleshooting (30 minutes)
Module 6: Memory Performance Feature: vNUMA with Memory Hot Add (30 minutes)
Module 7: Storage Performance and Troubleshooting (30 minutes)
Module 8: Network Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 7: Storage
Performance and
Troubleshooting (30
minutes)
So, if you want to know how many IOPS you can achieve with a given number of disks:
Total Raw IOPS = Disk IOPS * Number of disks
Functional IOPS = (Raw IOPS * Write%) / RAID Penalty + (Raw IOPS * Read%)
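These formulas can be checked with a short Python sketch. The disk count, per-disk IOPS, write ratio, and RAID-5 penalty of 4 below are illustrative values, not numbers from the lab:

```python
def functional_iops(disk_iops, num_disks, write_pct, raid_penalty):
    """Apply the raw and functional IOPS formulas from the text."""
    raw = disk_iops * num_disks                 # Total Raw IOPS
    write_iops = raw * write_pct / raid_penalty # writes pay the RAID penalty
    read_iops = raw * (1 - write_pct)           # reads do not
    return write_iops + read_iops

# 8 disks at 150 IOPS each, 30% writes, RAID-5 (write penalty of 4):
# 1200 raw IOPS -> 90 write IOPS + 840 read IOPS
print(functional_iops(150, 8, 0.30, 4))
```

Note how heavily the RAID write penalty discounts the raw number: at this read/write mix the same 8 disks deliver only about 930 of their 1200 raw IOPS.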
This test demonstrates some methods to identify poor storage performance, and how to
resolve it using VMware Storage DRS for workload balancing. The first step is to prepare
the environment for the demonstration.
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon in the top of the
vSphere Web Client.
latency that the application sees, and it includes the latencies of the entire storage
stack, including the guest OS, the VMkernel virtualization layers, and the physical
hardware. ESXi can't see application latency, because that is a layer above the ESXi
virtualization layer.
From ESXi, we see three main latencies that are reported in esxtop and vCenter.
The topmost is GAVG, or Guest Average latency, which is the total amount of latency
that ESXi can detect.
That is not to say this is the total amount of latency the application will see. In fact, if
you compare GAVG (the total amount of latency ESXi is seeing) with the actual
latency the application is seeing, you can tell how much latency the guest OS is adding
to the storage stack, which can tell you whether the guest OS is configured incorrectly
or is causing a performance problem. For example, if ESXi is reporting a GAVG of 10ms,
but the application or Perfmon in the guest OS is reporting a storage latency of 30ms,
that means 20ms of latency is somehow building up in the guest OS layer, and you
should focus your debugging on the guest OS's storage configuration.
GAVG is made up of two major components: KAVG and DAVG. DAVG is basically how
much time is spent in the device (the driver, HBA, and storage array), and KAVG is how
much time is spent in the ESXi kernel (that is, how much overhead the kernel is adding).
KAVG is actually a derived metric; ESXi does not measure it directly, but calculates it
with the following formula:
KAVG = Total Latency - DAVG
The VMkernel is very efficient at processing I/O, so an I/O really should not spend any
significant time waiting in the kernel; KAVG should be equal to 0 in well-configured,
well-running environments. When KAVG is not equal to 0, it most likely means that the
I/O is stuck in a kernel queue inside the VMkernel. So, the vast majority of the time,
KAVG will equal QAVG, or Queue Average latency (the amount of time an I/O is stuck
in a queue, waiting for a slot in a lower queue to free up so it can move down the stack).
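The derivation can be expressed in a couple of lines of Python (the millisecond values are made up for illustration):

```python
def kavg(gavg_ms, davg_ms):
    """KAVG is derived: total latency ESXi sees (GAVG) minus device latency (DAVG)."""
    return gavg_ms - davg_ms

# Healthy host: all of GAVG is explained by the device, so KAVG is 0.
print(kavg(gavg_ms=5.5, davg_ms=5.5))   # 0.0
# Queueing problem: 4ms of GAVG is unexplained by the device, pointing at a
# kernel queue (in which case KAVG will roughly equal QAVG).
print(kavg(gavg_ms=10.0, davg_ms=6.0))  # 4.0
```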
Select perf-worker-03a
1. Select "perf-worker-03a"
Select "Monitor"
Select "Performance"
Select "Advanced"
Click "Chart Options"
The disk that Iometer uses for generating workload is scsi0:1, or sdb inside the guest.
vSphere provides several storage features to help manage and control storage
performance:
1. Select "DatastoreCluster"
2. Select the "Monitor" tab
3. Select "Storage DRS"
4. Click "Run Storage DRS Now"
5. Click "Apply Recommendations"
Notice that SDRS recommends moving one of the workloads from DatastoreA to
DatastoreB. It is making the recommendation based on space. SDRS makes storage
moves based on performance only after it has collected performance data for more than
8 hours. Since the workloads started only recently, SDRS will not make a
recommendation to balance the workloads based on performance until it has collected
more data.
4. Investigate the different settings you can configure for Storage DRS
A number of enhancements have been made to Storage DRS in vSphere 6.0 in order to
remove some of its previous limitations.
Storage DRS has improved interoperability with deduplicated datastores: it can identify
whether datastores are backed by the same deduplication pool,
and hence avoid moving a VM to a datastore using a different deduplication pool.
Storage DRS has improved interoperability with thin-provisioned datastores: it can
identify whether thin-provisioned datastores are backed by the same
storage pool, and hence avoid moving a VM between datastores using the same
storage pool.
Storage DRS has improved interoperability with Array-based auto-tiering, so that
Storage DRS can identify datastores with auto-tiering, and treat them differently,
according to the type and frequency of auto-tiering.
Common to all these improvements is that they require VASA 2.0, which
means the storage vendor must provide an updated storage provider.
Launch PowerCLI
If the PowerCLI window is not already open, click on the VMware vSphere PowerCLI
icon in the taskbar to open a command prompt.
Conclusion
This concludes Module 7, Storage Performance and Troubleshooting. We hope
you have enjoyed taking it. Please do not forget to fill out the survey when you are
finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
Module 8: Network
Performance, Basic
Concepts and
Troubleshooting (15
minutes)
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon in the top of the
vSphere Web Client.
Launch PowerCLI
If the PowerCLI window is not already open, click on the VMware vSphere PowerCLI
icon in the taskbar to open a command prompt.
Select VM
1. Select "perf-worker-06a"
2. Select "Monitor" tab
3. Select "Performance" tab
4. Select "Advanced"
5. Click "Chart Options"
1. Select "Network"
2. Click "None"
3. Select "perf-worker-06a"
4. Select the Receive and Transmit packets dropped counters
5. Click "OK"
Note: If you are unable to select all the metrics shown here, wait until the script starts
the VMs, then open the "Chart Options" again.
Monitor chart
Depending on how long it has taken you to get here, the network load might already be
done. You should still be able to see the load that ran in the charts. Note that in the
picture above, we ran the network load twice for illustration purposes.
1. Here you can see the graphical network load on perf-worker-06a
2. Here you can monitor the load of the VM and see the actual numbers for the data
transmitted
Some good advice on what to look for is:
Usage:
If this number is too low, relative to what you expect, it might be because of problems
in the network, or in the VM.
Receive and Transmit packets dropped:
This is a good indication of contention. It means that packets are dropped and
might need to be retransmitted, which could be caused by contention or problems in
the network.
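One way to reason about these counters is as a drop percentage, sketched below in Python (the counter values are made up; esxtop exposes similar ready-made percentages such as %DRPRX and %DRPTX):

```python
def drop_pct(dropped, delivered):
    """Percentage of packets dropped out of all packets handled in the interval."""
    total = dropped + delivered
    return 100.0 * dropped / total if total else 0.0

# 50 packets dropped out of 10,000 received during the sample interval.
print(drop_pct(50, 9950))  # 0.5
```

Even a fraction of a percent of sustained drops is worth investigating, since each drop can trigger a retransmission and add latency.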
Let's go to the host and see whether this is a VM or a host problem.
Select Host
1. Select "esx-01a.corp.local"
2. Select "Monitor" tab
3. Select "Performance" tab
4. Select "Advanced"
5. Select "Network" from the drop-down menu
6. Click "Chart Options"
1. Click "None"
2. Select "esx-01a.corp.local"
3. Select "Receive and Transmit packets dropped"
4. Click "OK"
Monitor Chart
1. See if there are any dropped packets on the host
In this example, there are no packets dropped on the host, which indicates that this is a
VM problem.
Note that you might see different results in the lab, due to the nature of the Hands-on
Labs.
Launch PowerCLI
If the PowerCLI window is not already open, click on the VMware vSphere PowerCLI
icon in the taskbar to open a command prompt.
Conclusion
This concludes Module 8, Network Performance, Basic Concepts and
Troubleshooting. We hope you have enjoyed taking it. Please do not forget to fill out
the survey when you are finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
Module 3: CPU Performance Feature: Power Management Policies (15 minutes)
Module 4: vSphere Fault Tolerance (FT) and Performance (30 minutes)
Module 9: Network
Performance Feature:
Network IO Control with
Reservations (45
minutes)
Architecture
An overview of the architecture of Network I/O Control (NIOC).
Conclusion
This concludes Module 9, Network Performance Feature: Network IO Control
with Reservations. We hope you have enjoyed taking it. Please do not forget to fill out
the survey when you are finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
Module 3: CPU Performance Feature: Power Management Policies (15 minutes)
Module 4: vSphere Fault Tolerance (FT) and Performance (30 minutes)
Module 5: Memory Performance, Basic Concepts and Troubleshooting (30 minutes)
Module 6: Memory Performance Feature: vNUMA with Memory Hot Add (30 minutes)
Module 7: Storage Performance and Troubleshooting (30 minutes)
Module 8: Network Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 9: Network Performance Feature: Network IO Control with Reservations (45 minutes)
Module 10: Performance Tool: esxtop CLI introduction (30 minutes)
Module 11: Performance Tool: esxtop for vSphere Web Client (30 minutes)
Module 12: Performance Tool: vRealize Operations, next step in performance monitoring and troubleshooting (30 minutes)
Introduction to esxtop
There are several tools to monitor and diagnose performance in vSphere environments.
It is best to use esxtop to diagnose and further investigate performance issues that
have already been identified through another tool or method. esxtop is not a tool
designed for monitoring performance over the long term, but is great for deep
investigation or monitoring a specific issue or VM over a defined period of time.
In this lab, which should take about 30 minutes, we will use esxtop to dive into
performance troubleshooting across CPU, memory, storage and network. The goal of
this module is to expose you to the different views in esxtop, and to present you with
different loads in each view. This is not meant to be a deep dive into esxtop, but to get
you comfortable with this tool so that you can use it in your own environment.
To learn more about each metric in esxtop, and what they mean, we recommend that
you look at the links at the end of this module.
In the next module, we will look at the ESXtopNGC Plugin, which displays host-level
statistics in new and more powerful ways by tapping into the GUI capabilities of the
vSphere Web Client.
For day-to-day performance monitoring of an entire vSphere environment, vRealize
Operations Manager (vROps) is a powerful tool that can monitor your entire
virtual infrastructure. It incorporates high-level dashboard views and built-in intelligence
to analyze the data and identify possible problems. Module 12 of this lab shows you
some basic functions of vROps. We also recommend that you look at the other vROps
lab when you are finished with this one, for a better understanding of day-to-day
monitoring.
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To do so, click Start and On-Screen Keyboard, or the shortcut on the Taskbar.
Login to vSphere
Log into vSphere. The vSphere Web Client should be the default home page.
Check the Use Windows session authentication checkbox.
If, for some reason, that does not work, uncheck the box and use these credentials:
User name: CORP\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the Refresh icon in the top of the
vSphere Web Client.
Open PuTTY
Click the PuTTY icon on the taskbar
SSH to esx-01a
1. Select host esx-01a.corp.local
2. Click Open
Start esxtop
1. From the ESXi shell, type
esxtop
and press Enter.
2. Click the Maximize icon so we can see the maximum amount of information.
Monitor VM load
Monitor the load on the two worker VMs: perf-worker-01a and perf-worker-01b.
They should both be running at (or near) 100% guest CPU utilization. If not, wait a
moment and let the CPU workload start up.
One important metric to monitor is %RDY (CPU Ready). This metric is the percentage of
time a world is ready to run but is waiting for the CPU scheduler to give it time on a
physical CPU. It can go up to 100% per vCPU, which means that a VM with 2 vCPUs has
a maximum value of 200%. A good guideline is to keep this value below 5% per vCPU,
but the acceptable level will always depend on the application.
Look at the worker VMs to see if they go above the 5% per vCPU threshold. To force
esxtop to refresh immediately, press the Space bar.
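As a quick illustration of the guideline above, the per-vCPU check can be sketched in a few lines. This is not part of the lab scripts; the function name and the default threshold are illustrative:

```python
def rdy_exceeds_guideline(rdy_pct, num_vcpus, per_vcpu_limit=5.0):
    """Return True if a VM's total %RDY exceeds the guideline.

    esxtop reports %RDY summed across a VM's vCPUs, so a 2-vCPU VM
    can show up to 200%; the common guideline is 5% per vCPU.
    """
    return rdy_pct > per_vcpu_limit * num_vcpus

# A 2-vCPU VM showing 12% total ready time is above the 10% guideline.
print(rdy_exceeds_guideline(12.0, 2))  # True
print(rdy_exceeds_guideline(8.0, 2))   # False
```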
%USED is still only around 100, which means that the CPU benchmark is still
only using one vCPU per virtual machine.
%IDLE is now around 100, which means that one vCPU is idle.
%RDY has increased, which means that even though the additional vCPU is not being
used yet, it causes some additional CPU ready time, due to the extra overhead of
scheduling SMP virtual machines. This is also why right-sizing your virtual machines
is important if you want to optimize resource consumption.
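To make the relationship between these counters concrete: %USED, %IDLE and %RDY each scale up to 100 per vCPU, so %USED gives a rough estimate of how many vCPUs are actually busy. A small hypothetical sketch (not part of the lab):

```python
def busy_vcpus(used_pct, num_vcpus):
    """Estimate busy vCPUs from esxtop %USED.

    %USED is roughly 100 per fully busy vCPU, capped at the
    VM's vCPU count.
    """
    return min(used_pct / 100.0, num_vcpus)

# ~100% used on a 2-vCPU VM: about one vCPU doing work, one idle.
print(busy_vcpus(100.0, 2))  # 1.0
```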
SWCUR :
Shows how much the VM has swapped. This should be 0, but a non-zero value can be
acceptable if the two rate counters below are 0.
SWR/s :
Shows the read rate on the swap file.
SWW/s :
Shows the write rate on the swap file.
Depending on the lab, all counters should be ok. But due to the nature of the nested lab,
it is hard to predict exactly what you will see, so look around and check whether
everything looks fine.
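The three swap counters above can be read together as a simple health check: active swap reads or writes mean the VM is paying the swap penalty right now, while a non-zero SWCUR with zero rates only indicates past memory pressure. A sketch with illustrative names (SWCUR in MB, SWR/s and SWW/s in MB/s, as esxtop reports them):

```python
def swap_health(swcur_mb, swr_s, sww_s):
    """Classify a VM's swap state from esxtop memory counters."""
    if swr_s > 0 or sww_s > 0:
        return "actively swapping"   # immediate performance problem
    if swcur_mb > 0:
        return "swapped earlier"     # past pressure; ok if rates stay 0
    return "ok"

print(swap_health(0, 0, 0))        # ok
print(swap_health(512, 0, 0))      # swapped earlier
print(swap_health(512, 3.2, 0.5))  # actively swapping
```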
Power on perf-worker-04a
1. Right Click "perf-worker-04a"
2. Select "Power"
3. Click "Power On"
Start lab
In the PowerCLI window type
.\StartStorageTest.ps1
and press Enter to start the lab.
The lab will take about 5 minutes to prepare. Feel free to continue with the other steps
while the script finishes.
After you start the script, be sure that you don't close any windows that appear.
Different views
When looking at storage in esxtop, you have multiple options to choose from.
esxtop shows the storage statistics in four different screens:
adapter screen (d)
device screen (u)
vm screen (v)
vSAN screen (x)
Monitor VM load
You have 4 running VMs in the lab.
Two of them are running IOmeter workloads, and the other two are iSCSI storage targets
using a RAM disk. Because they use a RAM disk as the storage target, they do not
generate any physical disk I/O.
The metrics to look for here are:
CMDS/s :
This is the total number of commands per second, which includes IOPS (Input/Output
Operations Per Second) as well as other SCSI commands, such as SCSI reservations,
locks, vendor string requests and unit attention commands, being sent to or coming
from the device or virtual machine being monitored.
In most cases, CMDS/s = IOPS, unless there are a lot of metadata operations (such as
SCSI reservations).
LAT/rd and LAT/wr :
Indicate the average response time of Read and Write I/O, as seen by the VM.
In this case, you should see high CMDS/s values on the worker VMs currently running
the IOmeter load (perf-worker-02a and 03a), indicating that we are generating a lot of
I/O, and a high value in LAT/wr, since we are only doing writes.
The numbers can differ on your screen, due to the nature of the Hands-on Labs.
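Since CMDS/s counts all SCSI commands, IOPS can be approximated by subtracting the metadata operations when you can estimate them. A minimal sketch, with illustrative names and numbers:

```python
def approx_iops(cmds_per_s, metadata_ops_per_s=0.0):
    """Approximate IOPS from esxtop CMDS/s.

    CMDS/s counts every SCSI command (reads, writes, reservations,
    locks, unit attention, ...); with few metadata operations,
    CMDS/s is roughly equal to IOPS.
    """
    return max(cmds_per_s - metadata_ops_per_s, 0.0)

print(approx_iops(5000))       # 5000.0 (no metadata load: CMDS/s ~= IOPS)
print(approx_iops(5000, 200))  # 4800.0
```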
Monitor load
Monitor the metrics.
Note that the results on your screen might differ, depending on the load of the
environment where the Hands-on Labs are running.
The screen updates automatically, but you can force a refresh by pressing the Space bar.
The metrics to watch here are %DRPTX and %DRPRX: the percentage of transmitted and
received packets that were dropped. If these numbers go up, it can be an indication of
high network utilization.
Note that the StartNetTest.ps1 script that you ran in the first step starts the VMs and
then waits 2 minutes before running a network load for 5 minutes.
Depending on how fast you got to this step, you might not see any load if it took you
more than seven minutes.
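%DRPTX and %DRPRX are simply dropped packets as a percentage of the packets transmitted or received during the sample interval. A minimal sketch of that calculation (the function and variable names are illustrative, not from esxtop itself):

```python
def drop_pct(dropped, total):
    """Percentage of packets dropped in an interval, as esxtop's
    %DRPTX / %DRPRX report it. A sustained non-zero value suggests
    the link or the VM's network stack is saturated."""
    return 0.0 if total == 0 else 100.0 * dropped / total

print(drop_pct(5, 1000))  # 0.5
print(drop_pct(0, 0))     # 0.0 (no traffic in the interval)
```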
Launch PowerCLI
If the PowerCLI window is not already open, click on the VMware vSphere PowerCLI
icon in the taskbar to open a command prompt.
Conclusion
This concludes Module 10, Performance Tool: esxtop CLI introduction. We hope
you have enjoyed taking it. Please do not forget to fill out the survey when you are
finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Separate tabs for CPU, memory, network and disk performance statistics
Flexible batch output
Flexible counter selection
Advanced data grid for displaying stats (sortable columns, expandable rows, etc.)
Configurable refresh rate
VM-only stats
Embedded tooltip for counter description
While the time available in this lab constrains the number of performance problems we
can review as examples, we have selected relevant problems that are commonly seen in
vSphere environments. By walking through these examples, you should be better able
to understand and troubleshoot typical performance problems.
For the complete Performance Troubleshooting Methodology and a list of VMware Best
Practices, please visit the vmware.com website:
http://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-60-monitoring-performance-guide.pdf
What is a Fling?
From the Flings website: https://labs.vmware.com/about
Our engineers work on tons of pet projects in their spare time, and are always looking to
get feedback on their projects (or flings). Why flings? A fling is a short-term thing, not
a serious relationship but a fun one. Likewise, the tools that are offered here are
intended to be played with and explored. None of them are guaranteed to become part
of any future product offering and there is no support for them. They are, however,
totally free for you to download and play around with them!
On-Screen Keyboard
Another option, if you are having issues with the keyboard, is to use the On-Screen
Keyboard.
To open the On-Screen Keyboard go to "Start - On-Screen Keyboard" or use the shortcut
in the Taskbar.
Login to vSphere
Log into vSphere. The vSphere web client is the default start page.
1. Login using Windows session authentication.
Credentials used are:
User name: corp\Administrator
Password: VMware1!
Refresh the UI
In order to reduce the amount of manual input in this lab, a lot of tasks are automated
using scripts. Therefore, it's possible that the vSphere Web Client does not reflect the
actual state of the inventory immediately after a script has run.
If you need to manually refresh the inventory, click the "Refresh" icon in the top of the
vSphere Web Client.
Start Load
Type
.\StartCPUTest2.ps1
and press Enter to start the lab.
You can continue now, but please don't close any windows since it might stop the script.
Esxtop
In the vSphere web client
1.
2.
3.
4.
Refresh Rate
1. Click the Set Refresh rate button
2. Change Refresh Rate to 3
3. Select OK
Layout
Since the resolution within our environment is small, we need to make room for the
metrics and counters.
Start by closing all the extra windows in the Web client:
1. Navigator
2. Alarms (should already be closed)
3. Work in progress (should already be closed)
4. Recent tasks (should already be closed)
Display counters
1. Click Select Display Counters
Look at the worker VMs to see if they go above the 5% per vCPU guideline. To force
esxtop to refresh, press the Space bar.
Esxtop
1.
2.
3.
4.
5.
Refresh Rate
1. Click the Set Refresh rate button
2. Change Refresh Rate to 3
3. Select OK.
Layout
If you did not change the layout in the last step, please do so now to have enough room
to see as many metrics as possible.
Start by closing all the extra windows in the Web client
1. Navigator
2. Alarms (should already be closed)
3. Work in progress (should already be closed)
4. Recent tasks (should already be closed)
Display counters
1. Click the Select display counters button
2. Select the GID, MCTL? and MCTLSZ counters
3. Click OK
Power on perf-worker-04a
1. Right Click "perf-worker-04a"
2. Select "Power"
Stop load
1. To stop the load on the lab, close the two VM stat collector windows.
Esxtop
In the vSphere client
1.
2.
3.
4.
5.
Refresh Rate
1. Click the Set Refresh rate button
2. Change Refresh Rate to 3
3. Select OK.
Layout
If you haven't done so in the previous steps, start by closing all the extra windows in the
Web Client.
1. Navigator
2. Alarms (should already be closed)
3. Work in progress (should already be closed)
4. Recent tasks (should already be closed)
Display counters
1. Click the Select display counters button
2. Select the GID, ID and NDK counters
3. Click OK
Stop load
1. To stop the lab, close the two IOmeter windows.
Error
If you experience the error above, just select
1. Close the program
Esxtop
In the vSphere Web client :
1.
2.
3.
4.
Layout
If you haven't done so in the previous steps, start by closing all the extra windows in the
Web Client.
1. Navigator
2. Alarms (should already be closed)
3. Work in progress (should already be closed)
4. Recent tasks (should already be closed)
Display counters
1. Click Select display counters button
Note that the StartNetTest.ps1 script that you ran in the first step starts the VMs and
then waits 2 minutes before running a network load for 5 minutes.
Depending on how fast you got to this step, you might not see any load to begin with,
since the network load has not started yet.
Launch PowerCLI
If the PowerCLI window is not already open, click on the VMware vSphere PowerCLI
icon in the taskbar to open a command prompt.
We hope that we have shown you an alternative way of using esxtop, in the vSphere
Web Client, and that you find it useful.
If you want to know more about esxtop for the vSphere Web Client, see these articles:
VMware flings website
https://labs.vmware.com/flings/esxtopngc-plugin
Esxtop bible
https://communities.vmware.com/docs/DOC-9279
Yellow-Bricks esxtop page
http://www.yellow-bricks.com/esxtop/
Conclusion
This concludes Module 11, Performance Tool: esxtop for vSphere Web Client. We
hope you have enjoyed taking it. Please do not forget to fill out the survey when you are
finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Architecture
An overview of the architecture of vRealize Operations Manager version 6. vRealize
Operations Manager now uses a scale-out architecture, whereas the older 5.x versions
used a scale-up architecture.
Conclusion
This concludes Module 12, Performance Tool: vRealize Operations, next step in
performance monitoring and Troubleshooting. We hope you have enjoyed taking
it. Please do not forget to fill out the survey when you are finished.
If you have time remaining, here are the other modules that are part of this lab along
with an estimated time to complete each one. Click on 'More Options - Table of
Contents' to quickly jump to a module in the manual.
Module 1: CPU Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 2: CPU Performance Feature: Latency Sensitivity Setting (45 minutes)
Module 3: CPU Performance Feature: Power Management Policies (15 minutes)
Module 4: vSphere Fault Tolerance (FT) and Performance (30 minutes)
Module 5: Memory Performance, Basic Concepts and Troubleshooting (45 minutes)
Module 6: Memory Performance Feature: vNUMA with Memory Hot Add (45 minutes)
Module 7: Storage Performance and Troubleshooting (30 minutes)
Module 8: Network Performance, Basic Concepts and Troubleshooting (15 minutes)
Module 9: Network Performance Feature: Network IO Control with Reservations (45 minutes)
Module 10: Performance Tool: esxtop CLI introduction (30 minutes)
Module 11: Performance Tool: esxtop for vSphere Web Client (30 minutes)
Module 12: Performance Tool: vRealize Operations, next step in performance monitoring and Troubleshooting (30 minutes)
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit
http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-SDC-1604
Version: 20151005-065137