
SAN - Queue Depth

Queue Depth is the total number of outstanding I/O that the host allows to a storage
target.
It can be configured on a per-LUN basis (most common), a per-HBA basis, or a per-storage-target basis.
A common guideline is to set a default queue depth between 8 and 32 per LUN, and to limit the number of busy LUNs on a port so that the port stays under the queue-full condition.
Or
Queue depth is the number of I/O requests (SCSI commands) that can be queued at
one time on a storage controller. Each I/O request from the host's initiator HBA to
the storage controller's target adapter consumes a queue entry. Typically, a higher
queue depth equates to better performance. However, if the storage controller's
maximum queue depth is reached, that storage controller rejects incoming
commands by returning a QFULL response to them. If a large number of hosts are
accessing a storage controller, plan carefully to avoid QFULL conditions, which
significantly degrade system performance and can lead to errors on some systems.
https://library.netapp.com/ecmdocs/ECMP1196793/html/GUID-A055B184-08764376-9C75-35FE8C9BE832.html
Needed queue depth = (Number of I/O per second) x (Response time)
For example, if you need 40,000 I/O per second with a response time of 3
milliseconds, the needed queue depth = 40,000 x (.003) = 120.
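This is just Little's Law applied to outstanding I/Os. A minimal sketch of the calculation (the function name is illustrative, not from any vendor tool):

```python
def needed_queue_depth(iops: float, response_time_s: float) -> float:
    """Outstanding I/Os required to sustain `iops` at the given response time."""
    return iops * response_time_s

# The example from the text: 40,000 I/O per second at a 3 ms response time.
print(needed_queue_depth(40_000, 0.003))  # -> 120.0
```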

https://community.hds.com/thread/2925
http://storagefoo.blogspot.in/2006/04/queue-depths.html
The number of outstanding I/Os per physical storage port has a direct impact on performance and scalability. Storage ports within arrays have varying queue depths, from 256 queues per port to 512, 1024, or 2048. The number of initiators (aka HBAs) a single storage port can support is directly related to the storage port's available queues.
For example, a port with 512 queues and a typical LUN queue depth of 32 can support up to:
512 / 32 = 16 LUNs on 1 initiator (HBA), or 16 initiators (HBAs) with 1 LUN each, or any combination that does not exceed this number.
In general: set the per-LUN queue depth so that queue depth <= 512 / (#LUNs x #Hosts), and in any case <= 32.
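A small sketch of the budget check behind this rule of thumb; the worst case assumes every host drives every LUN at full queue depth simultaneously, which is the conservative assumption (as noted below, FC traffic is bursty, so this rarely happens, but it can):

```python
def port_is_oversubscribed(port_queues: int, hosts: int, luns: int,
                           lun_queue_depth: int) -> bool:
    """True if the worst case (all hosts driving all LUNs at full queue depth)
    can exceed the storage port's queue entries and risk QFULL."""
    return hosts * luns * lun_queue_depth > port_queues

# The example from the text: 512 port queues, LUN queue depth of 32.
print(port_is_oversubscribed(512, hosts=1, luns=16, lun_queue_depth=32))  # False (16 * 32 = 512)
print(port_is_oversubscribed(512, hosts=4, luns=8, lun_queue_depth=32))   # True  (4 * 8 * 32 = 1024)
```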

Configurations that exceed this number are in danger of returning QFULL conditions.
A QFULL condition signals that the target/storage port is unable to process more IO
requests and thus the initiator will need to throttle IO to the storage port. As a result
of this, application response times will increase and IO activity will decrease.
Typically, initiators will throttle I/O and will gradually start increasing it again.
While most OSes can handle QFULL conditions gracefully, some misinterpret QFULL
conditions as I/O errors. From what I recall, AIX is one such animal, where after three
successive QFULL conditions an I/O error will occur.
Having said all this, since FC traffic is bursty by nature, the probability that all
initiators will drive a full load on the LUNs at the same time, with the same I/O
characteristics, to the same storage port is probably low; however, it's possible and
it happens from time to time. So watch out and plan ahead.
The key message to remember: when someone tells you they can connect an
enormous number of hosts to their disk array, ask them about the queue depth
setting on the host and the available queue depth per storage port.
That's the key. For random I/O, a typical LUN queue depth setting is
anywhere from 16 to 32. For sequential I/O, 8 is a typical setting.
https://tuf.hds.com/instructions/performance/QueueDepth.php
https://community.hds.com/docs/DOC-1000345

How to Calculate Queue Depth for Midrange Arrays


Goal :
How to calculate queue depth for midrange arrays to guarantee that you will
never exceed queue capacity
How to perform queue depth calculation
How to set queue depth (Q-depth, QDepth, MAXTAGS)
Symptom :
[Performance] Queue-full occurs often(Port-xx,Connected host
num.=xxx,Queue depth=512)
Fact :
LUN management
Hitachi Thunder 9500 V Series modular storage system (9500 V, 9580 V, 9585 V,
9570 V), DF600
Hitachi TagmaStore Adaptable Modular Storage/Workgroup Modular Storage (AMS
200, 500, 1000 and WMS 1000), DF700
Hitachi Adaptable Modular Storage 2000 family (AMS 2000) (includes 2100, 2300,
2500, 2500DC), DF800
Fix :

For the Hitachi Thunder 9500 V Series modular storage system (9500 V), Hitachi
TagmaStore Adaptable Modular Storage/Workgroup Modular Storage (AMS and
WMS), and
Hitachi Adaptable Modular Storage 2000 family (AMS 2000), you can calculate
queue depth using the following formula to guarantee that you will never exceed
queue
capacity (keep in mind that maximum performance may be achieved at higher
queue depth values):
Divide 512 by the number of LUNs, then divide again by the number of hosts; set
the queue depth to that result, capped at 32.
Note :
There are 512 queue slots per port. There is no problem with queue depth values higher
than calculated above unless more than 512 total commands arrive at the port or
more than 32 commands arrive for a single LUN. The formula above guarantees
you will never exceed the queue capacity. The maximum performance may be
achieved at higher queue depth values. The value above is quite general and
assumes all LUNs are online and available to all hosts.
In conclusion, avoid having more than 512 commands arrive at the port
simultaneously and do not exceed 32 per LUN.
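A minimal sketch of that calculation; the integer division and the example numbers are illustrative assumptions:

```python
def midrange_queue_depth(luns: int, hosts: int,
                         port_queues: int = 512, per_lun_cap: int = 32) -> int:
    """Per-host, per-LUN queue depth that can never exceed the port's 512 queue
    slots, assuming all LUNs are presented to all hosts (as the note above says)."""
    return min(port_queues // (luns * hosts), per_lun_cap)

# e.g. 8 LUNs shared by 4 hosts on one port: 512 / 8 / 4 = 16, below the 32 cap.
print(midrange_queue_depth(luns=8, hosts=4))  # -> 16
```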
Note :
Multipathing rules of thumb for Active/Active configuration
For SAS drives:
Host queue depth settings need to be changed to allow a maximum
QD of 32 to each LUN (SAS).
This means that if 1 x Server has 2 x HBAs with access to the LUN, then the Max QD
= 16 per HBA to that LUN.
This means that if 2 x Servers have 2 x HBAs each with access to the LUN, then the
Max QD = 8 per HBA to that LUN.
This means that if 4 x Servers have 2 x HBAs each with access to the LUN, then the
Max QD = 4 per HBA to that LUN.
For SATA drives:
Host queue depth settings need to be changed to allow a maximum
QD of 6 to each LUN (SATA).
This means that if 1 x Server has 2 x HBAs with access to the LUN, then the Max
QD = 3 per HBA to that LUN.
This means that if 2 x Servers have 2 x HBAs each with access to the LUN, then the
Max QD = 2 per HBA to that LUN (this will give a total queue of 8).
This means that if 4 x Servers have 2 x HBAs each with access to the LUN, then the
Max QD = 1 per HBA to that LUN (this will give a total queue of 8).
* For recommendations on setting queue depth, please review the latest Host Install
Guide for your host and array type.

* This equation does not apply to array-to-array configurations. Please see the
related guide to configure externalized storage.
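A rough sketch of the rules of thumb above: the per-LUN maximum (32 for SAS, 6 for SATA) is divided across every active path (server x HBA) to the LUN. Rounding up to a whole number is an assumption inferred from the SATA examples, where the resulting totals come out to 8:

```python
import math

PER_LUN_MAX = {"SAS": 32, "SATA": 6}

def per_hba_queue_depth(drive_type: str, servers: int, hbas_per_server: int) -> int:
    """Maximum queue depth per HBA to one LUN in an active/active configuration."""
    paths = servers * hbas_per_server
    return max(1, math.ceil(PER_LUN_MAX[drive_type] / paths))

print(per_hba_queue_depth("SAS", servers=2, hbas_per_server=2))   # -> 8
print(per_hba_queue_depth("SATA", servers=4, hbas_per_server=2))  # -> 1
```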
ESX HBA queue depth

https://social.technet.microsoft.com/Forums/windowsserver/en-US/0e76444d-b14d4313-bed5-02d2133fbf0c/disk-queue-depth-vs-disk-queue-length?forum=winservergen

Disk queue depth is a variable that modern drives use in calculating the elevator
algorithm. But it is also much more than that: it is derived from the current disk
activity and the number of disks in your given scenario.
When a disk head reads or writes from/to a disk, it can only move in one direction until it has
serviced all the reads/writes (requests) it has in its queue (disk queue length). Once that
queue is exhausted (read and written in that direction), it can move in the other
direction. Data that does not fall sequentially from the location of the head when the request is
received is queued up (disk queue length) until the head moves the other way (how far away is it
and how long until we get there: disk queue depth).
Where the data is located on the disk (how far from the current location of the head) and the
direction the head is moving in when the requests enter the queue add to the length of the
queue, and can possibly add to the depth, if, for example, the data lies on disk in the opposite
direction from the one the head is moving in, because the head can't move back in the direction
of the data until it has serviced the current queue.
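A minimal sketch of the elevator (SCAN/LOOK) idea described above: the head services every pending request in its current direction of travel before reversing. Modelling requests simply as track numbers is an illustrative simplification:

```python
def elevator_order(head: int, requests: list[int], moving_up: bool = True) -> list[int]:
    """Order in which pending requests are serviced by one elevator-style sweep."""
    if moving_up:
        ahead = sorted(r for r in requests if r >= head)                   # serviced on the way up
        behind = sorted((r for r in requests if r < head), reverse=True)   # serviced after reversing
    else:
        ahead = sorted((r for r in requests if r <= head), reverse=True)
        behind = sorted(r for r in requests if r > head)
    return ahead + behind

# Head at track 50, moving up: tracks ahead are serviced first, then the rest on the way back.
print(elevator_order(50, [10, 95, 60, 35, 80]))  # -> [60, 80, 95, 35, 10]
```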
http://techbus.safaribooksonline.com/book/programming/microsoftaspdotnet/9781430243380/chapter-10-infrastructure-and-operations/navpoint-124

Hello,
The queue length is the number of pending I/O reads/writes to the disk drive system. The depth is
the number of drives needed to eliminate the disk queue length. You can increase performance
and reduce bottlenecks (queue length) by adding more drives (depth).
Hello,

Disk queue length is used to determine the number of I/O requests queued for service; track Avg.
Disk Queue Length for the LogicalDisk object.
The queue depth is the number of outstanding read and/or write requests waiting to access the
hard drive.
Sounds quite the same...
