From: Harris, James R <james.r.harris@intel.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)
Date: Thu, 12 Apr 2018 18:11:26 +0000	[thread overview]
Message-ID: <DBFF53EB-BA97-48D3-8E6B-3123B5E553F2@intel.com> (raw)
In-Reply-To: C4CE0E59D8C78F49A2AB85096897D2DB9E2703AD@fmsmsx116.amr.corp.intel.com


I agree that this should probably go into a new vbdev module.  We do want to keep the existing pass-through module as the example which is as simple as possible.  Then if we find we need other changes to support Gang’s per-client use case, we don’t have to worry about further complicating the example.

The question is then what to name it.  I don’t think “qos” is the best name – since this module isn’t actually doing any QoS work – it just enables per-client QoS.  All of the QoS work is done in the common bdev layer – not a bdev module.  Maybe something along the lines of “shared_passthru”?  I don’t particularly care for that name either – if anyone has another idea, please advise.

This new module can also utilize the spdk_bdev_part API to significantly simplify the code.  Paul purposely did not use spdk_bdev_part in the passthru example because he wanted to explicitly show how everything should be implemented in the example bdev module itself.  But for this module, since it’s not an example module, we don’t have that same concern.

-Jim


From: SPDK <spdk-bounces@lists.01.org> on behalf of "Kariuki, John K" <john.k.kariuki@intel.com>
Reply-To: Storage Performance Development Kit <spdk@lists.01.org>
Date: Thursday, April 12, 2018 at 10:01 AM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)

Gang/Paul
Does this functionality belong in the passthru bdev, or should it be implemented in another vbdev called something like “qos”? My understanding of the passthru bdev is that it’s a template that anyone looking to build a vbdev can use as a starting point. It has all the functions needed for an SPDK module stubbed out, so that someone can pick it up, make a copy and a few modifications, and then they are off to the races implementing their new virtual bdev.

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Cao, Gang
Sent: Tuesday, April 10, 2018 6:05 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)

Yes. It’s a good extension, coupling the vbdev and bdev to meet the client’s QoS goal. From my understanding, for a client, a “device” will always be linked through its own specific “connection”, so per-client (or per-connection) QoS can also be achieved per “device”.

The other comment is about moving this QoS rate limiting from the client side to the target side. A client can have its own QoS for the connection before submitting I/Os; with this work, the target side can do a similar thing, so QoS at the client side may not be needed.

Thanks,
Gang

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Luse, Paul E
Sent: Tuesday, April 10, 2018 11:56 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] About per client QoS (IOPS rate limiting)

Hi Gang,

This is pretty cool.  So essentially the PT vbdev is just a per-client hook for QoS?

Thx
Paul

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Cao, Gang
Sent: Monday, April 9, 2018 9:03 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: [SPDK] About per client QoS (IOPS rate limiting)

Hi all,

There has been some work done to have SPDK support QoS per bdev, meaning each bdev has its own assigned IOPS rate limit. This QoS functionality sits below storage protocols like NVMe-oF, iSCSI and so on.
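To make the mechanism concrete, here is an illustrative sketch (not SPDK’s actual code) of timeslice-based IOPS rate limiting, similar in spirit to what the common bdev layer does: each timeslice grants a quota of I/Os, and submissions beyond the quota are queued until the next timeslice refills it. The class and parameter names are made up for illustration.

```python
from collections import deque

class IopsLimiter:
    def __init__(self, iops_limit, timeslice_us=1000):
        # Quota of I/Os allowed per timeslice (at least 1).
        self.per_slice = max(1, iops_limit * timeslice_us // 1_000_000)
        self.remaining = self.per_slice
        self.queued = deque()

    def submit(self, io):
        """Submit an I/O; returns True if sent now, False if queued."""
        if self.remaining > 0:
            self.remaining -= 1
            return True
        self.queued.append(io)
        return False

    def on_timeslice(self):
        """Poller callback: refill the quota and drain queued I/Os."""
        self.remaining = self.per_slice
        sent = []
        while self.queued and self.remaining > 0:
            self.remaining -= 1
            sent.append(self.queued.popleft())
        return sent

limiter = IopsLimiter(iops_limit=20000)  # 20 I/Os per 1 ms timeslice
results = [limiter.submit(i) for i in range(25)]
print(results.count(True))          # 20 sent immediately, 5 queued
print(len(limiter.on_timeslice()))  # next timeslice drains the 5
```

In SPDK the refill is driven by a poller on a dedicated thread, but the quota-and-queue shape is the same.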

At the last PRC SPDK summit, there were some questions about per-client QoS (i.e., for the NVMe-oF initiator / host) in addition to the per-bdev QoS: each client, such as an NVMe-oF initiator, should see its own IOPS rate limit no matter whether the connected device is exclusive or shared at the target side.

Based on Paul’s great work on the Passthru vbdev, I’ve posted the three patches below to achieve per-client QoS. Each client sees its own passthru vbdev with QoS assigned on that vbdev, and these vbdevs share the same underlying bdev. Currently only shareable, read-only vbdevs on the same bdev are supported.

https://review.gerrithub.io/#/c/406888/     vbdev: update and have a create_passthru_disk function
https://review.gerrithub.io/#/c/406891/     vbdev: add the construct_passthru_bdev RPC method
https://review.gerrithub.io/#/c/406977/     vbdev: make the passthru vbdev share the same underlying bdev

The possible usage is as follows:
[Taking the NVMe-oF target as an example]

1.  In the nvmf.conf file,
# Configure the Passthru vbdev on same bdev like Malloc0
[Passthru]
PT Malloc0 PT0
PT Malloc0 PT1

# The subsystem is configured with shareable bdev by Passthru vbdevs
[Subsystem2]
  NQN nqn.2016-06.io.spdk:cnode2
  Listen RDMA 192.168.2.21:4420
  AllowAnyHost No
  Host nqn.2016-06.io.spdk:init
  SN SPDK00000000000002
  Namespace PT0   >>>> This can be connected by one client
  Namespace PT1   >>>> This can be connected by another client

# Assign different QoS IOPS limiting on Passthru vbdevs
[QoS]
Limit_IOPS PT0 20000
Limit_IOPS PT1 30000

2. Use RPC methods to add the Passthru vbdev at runtime:
a. python ./scripts/rpc.py construct_passthru_bdev Malloc0 PT3
b. python ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 PT3
c. python ./scripts/rpc.py enable_bdev_qos PT3 20000
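For anyone curious what these rpc.py invocations do under the hood: SPDK’s RPC server speaks JSON-RPC 2.0 (by default over a Unix domain socket at /var/tmp/spdk.sock). A minimal sketch of building such a request is below; the parameter names (`base_bdev_name`, `passthru_bdev_name`) are assumptions based on the patches above, so check the patch for the exact fields.

```python
import json

def make_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request body, as rpc.py would."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# Equivalent of: rpc.py construct_passthru_bdev Malloc0 PT3
# (parameter names are assumed, not confirmed from the patch)
req = make_request("construct_passthru_bdev",
                   {"base_bdev_name": "Malloc0",
                    "passthru_bdev_name": "PT3"})
print(req)
```

The same shape applies to the nvmf_subsystem_add_ns and enable_bdev_qos calls, just with different method names and params.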

If there is any concern or comment, please feel free to let me know.

Thanks,
Gang

