Had an out-of-band chat with Nate on this: what if we rename Paul's module to "example", and reappropriate the "passthru" name for this new module that we are proposing?

-Jim

From: James Harris
Date: Thursday, April 12, 2018 at 11:11 AM
To: Storage Performance Development Kit
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)

I agree that this should probably go into a new vbdev module. We want to keep the existing pass-through module as the example, which should be as simple as possible. Then if we find we need other changes to support Gang's per-client use case, we don't have to worry about further complicating the example.

The question is then what to name it. I don't think "qos" is the best name, since this module isn't actually doing any QoS work; it just enables per-client QoS. All of the QoS work is done in the common bdev layer, not in a bdev module. Maybe something along the lines of "shared_passthru"? I don't particularly care for that name either; if anyone has another idea, please advise.

This new module can also use the spdk_bdev_part API to significantly simplify the code. Paul purposely did not use spdk_bdev_part in the passthru example because he wanted to show explicitly how everything should be implemented in the example bdev module itself. But since this module is not an example module, we don't have that same concern.

-Jim

From: SPDK on behalf of "Kariuki, John K"
Reply-To: Storage Performance Development Kit
Date: Thursday, April 12, 2018 at 10:01 AM
To: Storage Performance Development Kit
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)

Gang/Paul,

Does this functionality belong in the passthru bdev, or should it be implemented in another vbdev called something like "qos"? My understanding of the passthru bdev is that it's a template that anyone looking to build a vbdev can use as a starting point.
It has all the functions needed to create an SPDK module stubbed out, so that someone can pick it up, make a copy and a few modifications, and then be off to the races implementing their new virtual bdev.

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Cao, Gang
Sent: Tuesday, April 10, 2018 6:05 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)

Yes. It's a good extension to couple the vbdev with the bdev to meet the client's QoS goal. For a client, from my understanding, a "device" will always be linked through its own specific "connection", so per-client (or per-connection) QoS can also be achieved per "device".

The other comment is that this moves the QoS rate limiting from the client side to the target side. A client can have its own QoS for the connection before submitting I/Os; with this work, the target side can do the same thing, so QoS at the client side may no longer be needed.

Thanks,
Gang

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Tuesday, April 10, 2018 11:56 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)

Hi Gang,

This is pretty cool. So essentially the PT vbdev is just a per-client hook for QoS?

Thx
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Cao, Gang
Sent: Monday, April 9, 2018 9:03 PM
To: Storage Performance Development Kit
Subject: [SPDK] About per client QoS (IOPS rate limiting)

Hi all,

There has been some work done to have SPDK support QoS per bdev, meaning that each bdev has its own assigned IOPS rate limit. This QoS functionality sits below storage protocols like NVMe-oF, iSCSI and so on. At the last PRC SPDK summit, there were some questions about per-client QoS (i.e., per NVMe-oF initiator / host) in addition to the per-bdev QoS.
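The per-bdev rate limiting described in this thread is, conceptually, a timeslice-based admission check: each timeslice, a bdev may dispatch at most (limit x timeslice) I/Os, and the rest are queued. Below is a minimal toy sketch of that idea in plain Python; it is illustrative only and is not SPDK's actual implementation (the class and method names are invented for this sketch).

```python
from collections import deque

class IopsLimiter:
    """Toy model of per-bdev IOPS limiting: admit at most
    (iops_limit * timeslice) I/Os per timeslice, queue the rest.
    Illustrative only; not SPDK code."""

    def __init__(self, iops_limit, timeslice_s=0.001):
        # I/O budget for one timeslice (at least 1 so progress is made).
        self.per_slice = max(1, int(iops_limit * timeslice_s))
        self.remaining = self.per_slice
        self.queued = deque()

    def submit(self, io):
        """Return True if the I/O is dispatched now, False if queued."""
        if self.remaining > 0:
            self.remaining -= 1
            return True
        self.queued.append(io)
        return False

    def on_timeslice(self):
        """Timer callback: refill the budget and drain queued I/Os."""
        self.remaining = self.per_slice
        while self.queued and self.remaining > 0:
            self.remaining -= 1
            self.queued.popleft()
```

With a 20000 IOPS limit and a 1 ms timeslice, 20 I/Os are admitted per slice; the 21st submission in a slice is queued until the next refill.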
Each client, like an NVMe-oF initiator, can see its own IOPS rate limit, no matter whether the connected device is exclusive or shared at the target side.

Based on Paul's great work on the Passthru vbdev, I've posted the three patches below to achieve per-client QoS. Each client sees its own passthru vbdev, with QoS assigned on that vbdev. These vbdevs share the same bdev. Currently it only supports shareable and read-only vbdevs on the same bdev.

https://review.gerrithub.io/#/c/406888/ vbdev: update and have a create_passthru_disk function
https://review.gerrithub.io/#/c/406891/ vbdev: add the construct_passthru_bdev RPC method
https://review.gerrithub.io/#/c/406977/ vbdev: make the passthru vbdev share the same underlying bdev

Possible usage is as follows (taking the NVMe-oF target as an example):

1. In the nvmf.conf file:

  # Configure the Passthru vbdevs on the same bdev, e.g. Malloc0
  [Passthru]
    PT Malloc0 PT0
    PT Malloc0 PT1

  # The subsystem is configured with the shareable bdev via Passthru vbdevs
  [Subsystem2]
    NQN nqn.2016-06.io.spdk:cnode2
    Listen RDMA 192.168.2.21:4420
    AllowAnyHost No
    Host nqn.2016-06.io.spdk:init
    SN SPDK00000000000002
    Namespace PT0 >>>> This can be connected by one client
    Namespace PT1 >>>> This can be connected by another client

  # Assign different QoS IOPS limits to the Passthru vbdevs
  [QoS]
    Limit_IOPS PT0 20000
    Limit_IOPS PT1 30000

2. Use the RPC methods to add a Passthru vbdev at runtime:

  a. python ./scripts/rpc.py construct_passthru_bdev Malloc0 PT3
  b. python ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 PT3
  c. python ./scripts/rpc.py enable_bdev_qos PT3 20000

If there is any concern or comment, please feel free to let me know.

Thanks,
Gang
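The three-step runtime RPC sequence above repeats once per client, so it can be generated per vbdev. A small sketch that only builds and prints the rpc.py command lines for review (the names Malloc0, cnode2 and the limits 20000/30000 are taken from the examples in this thread; it does not talk to a running target):

```python
# Build the rpc.py command lines that give each client its own passthru
# vbdev (PT0, PT1, ...) on a shared base bdev, each with its own IOPS
# limit. Printed for review rather than executed.
base = "Malloc0"
nqn = "nqn.2016-06.io.spdk:cnode2"
limits = [20000, 30000]  # one entry per client, from the thread's example

cmds = []
for i, iops in enumerate(limits):
    pt = "PT%d" % i
    cmds.append("python ./scripts/rpc.py construct_passthru_bdev %s %s" % (base, pt))
    cmds.append("python ./scripts/rpc.py nvmf_subsystem_add_ns %s %s" % (nqn, pt))
    cmds.append("python ./scripts/rpc.py enable_bdev_qos %s %d" % (pt, iops))

print("\n".join(cmds))
```

Running the printed lines (e.g. via a shell, once verified against a live target) reproduces steps a-c above for each client's vbdev.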