Yes. Coupling the vbdev and the bdev this way is a good extension to meet the client’s QoS goal. From my understanding, a “device” for a client is always linked through its own specific “connection”, so per-client (or per-connection) QoS can also be achieved through per-“device” QoS.

 

The other comment is that this moves the QoS rate limiting from the client side to the target side. A client can have its own QoS for the connection before submitting I/Os, and with this work it can instead let the target side do the same thing, so QoS at the client side may no longer be needed.

 

Thanks,

Gang

 

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Luse, Paul E
Sent: Tuesday, April 10, 2018 11:56 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)

 

Hi Gang,

 

This is pretty cool.  So essentially the PT vbdev is just a per-client hook for QoS?

 

Thx

Paul

 

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Cao, Gang
Sent: Monday, April 9, 2018 9:03 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: [SPDK] About per client QoS (IOPS rate limiting)

 

Hi all,

 

There has been some work done to have SPDK support QoS per bdev, meaning each bdev has its own assigned IOPS rate limit. This QoS functionality sits below the storage protocols such as NVMe-oF, iSCSI and so on.
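For reference, the existing per-bdev QoS can be assigned in the configuration file like the following sketch (it reuses the [QoS] section syntax shown later in this mail; Malloc0 is just a placeholder bdev name):

# Per-bdev QoS: Malloc0 is limited to 20,000 IOPS no matter how many clients share it
[QoS]
Limit_IOPS Malloc0 20000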

 

At the last PRC SPDK summit, there were some questions about per-client QoS (i.e., per NVMe-oF initiator / host) in addition to the per-bdev QoS. The idea is that each client, like an NVMe-oF initiator, gets its own IOPS rate limit no matter whether the connected device is exclusive or shared at the target side.

 

Based on Paul’s great work on the Passthru vbdev, I’ve pushed the three patches below to achieve per-client QoS. Each client sees its own Passthru vbdev with QoS assigned on that vbdev, and these vbdevs share the same underlying bdev. Currently only shareable, read-only vbdevs on the same bdev are supported.
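To illustrate, the resulting topology is roughly (the names and limits match the configuration example below):

Client A --> PT0 (Limit_IOPS 20000) --\
                                       +--> Malloc0 (shared, read-only)
Client B --> PT1 (Limit_IOPS 30000) --/

Each client connects to its own Passthru vbdev, so the QoS limit applies per client while the data still lives on the single shared bdev.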

 

https://review.gerrithub.io/#/c/406888/     vbdev: update and have a create_passthru_disk function

https://review.gerrithub.io/#/c/406891/     vbdev: add the construct_passthru_bdev RPC method

https://review.gerrithub.io/#/c/406977/     vbdev: make the passthru vbdev share the same underlying bdev

 

A possible usage is as follows:

[Taking the NVMe-oF target as an example]

 

1. In the nvmf.conf file:

# Configure the Passthru vbdevs on the same bdev, e.g., Malloc0

[Passthru]

PT Malloc0 PT0

PT Malloc0 PT1

 

# The subsystem is configured with the shareable bdev through the Passthru vbdevs

[Subsystem2]

  NQN nqn.2016-06.io.spdk:cnode2

  Listen RDMA 192.168.2.21:4420

  AllowAnyHost No

  Host nqn.2016-06.io.spdk:init

  SN SPDK00000000000002

  Namespace PT0   >>>> This can be connected by one client

  Namespace PT1   >>>> This can be connected by another client

 

# Assign different QoS IOPS limits to the Passthru vbdevs

[QoS]

Limit_IOPS PT0 20000

Limit_IOPS PT1 30000

 

2. Use the RPC methods to add a Passthru vbdev at runtime (a verification step follows these commands):

a. python ./scripts/rpc.py construct_passthru_bdev Malloc0 PT3

b. python ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 PT3

c. python ./scripts/rpc.py enable_bdev_qos PT3 20000
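After that, the new vbdev and its QoS setting can be checked with the get_bdevs RPC method (assuming it is available in your SPDK build), e.g.:

python ./scripts/rpc.py get_bdevs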

 

If there are any concerns or comments, please feel free to let me know.

 

Thanks,

Gang