From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>, <hch@infradead.org>,
<virtualization@lists.linux-foundation.org>,
<kvm@vger.kernel.org>, <israelr@nvidia.com>, <nitzanc@nvidia.com>,
<oren@nvidia.com>, <linux-block@vger.kernel.org>,
<axboe@kernel.dk>
Subject: Re: [PATCH v2 1/1] virtio-blk: add num_io_queues module parameter
Date: Mon, 6 Sep 2021 14:59:40 +0300 [thread overview]
Message-ID: <1cbbe6e2-1473-8696-565c-518fc1016a1a@nvidia.com> (raw)
In-Reply-To: <20210906071957-mutt-send-email-mst@kernel.org>
On 9/6/2021 2:20 PM, Michael S. Tsirkin wrote:
> On Mon, Sep 06, 2021 at 01:31:32AM +0300, Max Gurtovoy wrote:
>> On 9/5/2021 7:02 PM, Michael S. Tsirkin wrote:
>>> On Thu, Sep 02, 2021 at 02:45:52PM +0100, Stefan Hajnoczi wrote:
>>>> On Tue, Aug 31, 2021 at 04:50:35PM +0300, Max Gurtovoy wrote:
>>>>> Sometimes a user would like to control the amount of IO queues to be
>>>>> created for a block device. For example, for limiting the memory
>>>>> footprint of virtio-blk devices.
>>>>>
>>>>> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>>>>> ---
>>>>>
>>>>> changes from v1:
>>>>> - use param_set_uint_minmax (from Christoph)
>>>>> - added "Should be > 0" to module description
>>>>>
>>>>> Note: This commit applies on top of Jens's branch for-5.15/drivers
>>>>> ---
>>>>> drivers/block/virtio_blk.c | 20 +++++++++++++++++++-
>>>>> 1 file changed, 19 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>>>>> index 4b49df2dfd23..9332fc4e9b31 100644
>>>>> --- a/drivers/block/virtio_blk.c
>>>>> +++ b/drivers/block/virtio_blk.c
>>>>> @@ -24,6 +24,22 @@
>>>>> /* The maximum number of sg elements that fit into a virtqueue */
>>>>> #define VIRTIO_BLK_MAX_SG_ELEMS 32768
>>>>> +static int virtblk_queue_count_set(const char *val,
>>>>> + const struct kernel_param *kp)
>>>>> +{
>>>>> + return param_set_uint_minmax(val, kp, 1, nr_cpu_ids);
>>>>> +}
>
> Hmm which tree is this for?
I mentioned in the note that it applies on top of the for-5.15/drivers
branch, but it can now be applied on linus/master as well.
>
>>>>> +
>>>>> +static const struct kernel_param_ops queue_count_ops = {
>>>>> + .set = virtblk_queue_count_set,
>>>>> + .get = param_get_uint,
>>>>> +};
>>>>> +
>>>>> +static unsigned int num_io_queues;
>>>>> +module_param_cb(num_io_queues, &queue_count_ops, &num_io_queues, 0644);
>>>>> +MODULE_PARM_DESC(num_io_queues,
>>>>> + "Number of IO virt queues to use for blk device. Should > 0");
>
>
> better:
>
> +MODULE_PARM_DESC(num_io_request_queues,
> +		 "Limit number of IO request virt queues to use for each device. 0 for no limit");
You proposed this and I replied to it below.
>
>
>>>>> +
>>>>> static int major;
>>>>> static DEFINE_IDA(vd_index_ida);
>>>>> @@ -501,7 +517,9 @@ static int init_vq(struct virtio_blk *vblk)
>>>>> if (err)
>>>>> num_vqs = 1;
>>>>> - num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);
>>>>> + num_vqs = min_t(unsigned int,
>>>>> + min_not_zero(num_io_queues, nr_cpu_ids),
>>>>> + num_vqs);
>>>> If you respin, please consider calling them request queues. That's the
>>>> terminology from the VIRTIO spec and it's nice to keep it consistent.
>>>> But the purpose of num_io_queues is clear, so:
>>>>
>>>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>>> I did this:
>>> +static unsigned int num_io_request_queues;
>>> +module_param_cb(num_io_request_queues, &queue_count_ops, &num_io_request_queues, 0644);
>>> +MODULE_PARM_DESC(num_io_request_queues,
>>> +		 "Limit number of IO request virt queues to use for each device. 0 for no limit");
>> The parameter is writable and can be changed, and new devices might then
>> be probed with the new value.
>>
>> It can't be zero in the code. We could change the param_set_uint_minmax
>> arguments and say that 0 means nr_cpus.
>>
>> I'm OK with the renaming, but I prefer to stick to the description we gave
>> in V3 of this patch (and maybe enable a value of 0 as mentioned above).