AIX’s disk and adapter drivers each use a queue to handle IO, split into an in-service queue and a wait queue.
Note that even though the disk is attached to the adapter, an IO request passes through the hdisk driver code before the adapter driver code.
IO requests in the in-service queue are sent to the storage, and the queue slot is freed when the IO is complete.
IO requests in the wait queue stay there until an in-service queue slot is free, at which time they are moved to the in-service queue and sent to the storage.
IO requests in the in-service queue are also called in-flight from the perspective of the device driver.
The size of the hdisk driver in-service queue is specified by the queue_depth attribute, while the size of the adapter driver in-service queue is specified by the num_cmd_elems attribute.
root # lsattr -EHl fcs0
attribute     value    description                        user_settable
intr_priority 3        Interrupt priority                 False
lg_term_dma   0x800000 Long term DMA                      True
max_xfer_size 0x100000 Maximum Transfer Size              True
num_cmd_elems 200      Maximum Number of COMMAND Elements True
sw_fc_class   2        FC Class for Fabric                True

root # lsattr -EHl hdisk0
attribute       value                            description                user_settable
PCM             PCM/friend/vscsi                 Path Control Module        False
algorithm       fail_over                        Algorithm                  True
hcheck_cmd      test_unit_rdy                    Health Check Command       True
hcheck_interval 60                               Health Check Interval      True
hcheck_mode     enabled                          Health Check Mode          True
max_transfer    0x40000                          Maximum TRANSFER Size      True
pvid            00c4c6c7b35f29770000000000000000 Physical volume identifier False
queue_depth     3                                Queue DEPTH                True
reserve_policy  no_reserve                       Reserve Policy             True
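Both attributes can be changed with chdev. A minimal sketch, using the devices shown above with illustrative values only (appropriate sizes depend on the storage back end and the workload):

# Illustrative values only; size the queues to your storage and workload.
# -P writes the change to the ODM only, so it takes effect at the next boot;
# this is useful when the device is busy and cannot be reconfigured live.
root # chdev -l hdisk0 -a queue_depth=16 -P
root # chdev -l fcs0 -a num_cmd_elems=1024 -P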
A physical disk can service only one IO at a time, but queuing several requests at the disk lets it reorder them with an elevator algorithm, minimizing actuator movement and latency.
Virtual disks are typically backed by many physical disks, so they can handle many IOs in parallel.
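Whether these queues are actually filling can be observed with extended disk statistics. As a sketch, the queue section of iostat -D reports avgwqsz (average wait-queue size) and sqfull (a count of how often the in-service queue filled); the exact output layout varies by AIX level:

# Extended statistics for hdisk0 every 5 seconds; nonzero sqfull and a
# growing avgwqsz suggest queue_depth is a bottleneck for this workload.
root # iostat -D hdisk0 5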
These queue sizes put a hard ceiling on throughput:
Maximum LUN IOPS = queue_depth / (average IO service time)
Maximum adapter IOPS = num_cmd_elems / (average IO service time)
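As a worked example, assuming an illustrative average IO service time of 10 ms: with the queue_depth of 3 shown above, the LUN tops out at 3 / 0.010 s = 300 IOPS, and the fcs0 adapter at 200 / 0.010 s = 20,000 IOPS. Raising queue_depth to 16 would lift the per-LUN ceiling to 1,600 IOPS, provided the backing storage can sustain it.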
Current default queue sizes (num_cmd_elems) for FC adapters range from 200 to 500, with maximum values of 2048 or 4096.
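Before raising num_cmd_elems (or queue_depth), the legal range for a given device can be checked with lsattr -R; a small sketch against the fcs0 adapter shown above:

# Display the allowable values for num_cmd_elems on this adapter
root # lsattr -Rl fcs0 -a num_cmd_elems

fcstat fcs0 also reports a "No Command Resource Count" that increments whenever the adapter runs out of command elements, a useful hint that num_cmd_elems is too small.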