From: "Michael S. Tsirkin" <mst@redhat.com>
To: Max Gurtovoy <mgurtovoy@nvidia.com>
Cc: virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	stefanha@redhat.com, oren@nvidia.com, linux-block@vger.kernel.org,
	axboe@kernel.dk
Subject: Re: [PATCH 1/1] virtio-blk: add num_io_queues module parameter
Date: Mon, 30 Aug 2021 17:48:31 -0400
Message-ID: <20210830174345-mutt-send-email-mst@kernel.org>
In-Reply-To: <20210830120023.22202-1-mgurtovoy@nvidia.com>

On Mon, Aug 30, 2021 at 03:00:23PM +0300, Max Gurtovoy wrote:
> Sometimes a user would like to control the amount of IO queues to be
> created for a block device. For example, for limiting the memory
> footprint of virtio-blk devices.
>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>

Hmm. It's already limited by # of CPUs... Why not just limit from the
hypervisor side? What's the actual use-case here?

> ---
>  drivers/block/virtio_blk.c | 26 +++++++++++++++++++++++++-
>  1 file changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index e574fbf5e6df..77e8468e8593 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -24,6 +24,28 @@
>  /* The maximum number of sg elements that fit into a virtqueue */
>  #define VIRTIO_BLK_MAX_SG_ELEMS 32768
>
> +static int virtblk_queue_count_set(const char *val,
> +				   const struct kernel_param *kp)
> +{
> +	unsigned int n;
> +	int ret;
> +
> +	ret = kstrtouint(val, 10, &n);
> +	if (ret != 0 || n > nr_cpu_ids)
> +		return -EINVAL;
> +	return param_set_uint(val, kp);
> +}
> +
> +static const struct kernel_param_ops queue_count_ops = {
> +	.set = virtblk_queue_count_set,
> +	.get = param_get_uint,
> +};
> +
> +static unsigned int num_io_queues;
> +module_param_cb(num_io_queues, &queue_count_ops, &num_io_queues, 0644);
> +MODULE_PARM_DESC(num_io_queues,
> +		 "Number of IO virt queues to use for blk device.");
> +
>  static int major;
>  static DEFINE_IDA(vd_index_ida);
>
> @@ -501,7 +523,9 @@ static int init_vq(struct virtio_blk *vblk)
>  	if (err)
>  		num_vqs = 1;
>
> -	num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);
> +	num_vqs = min_t(unsigned int,
> +			min_not_zero(num_io_queues, nr_cpu_ids),
> +			num_vqs);
>
>  	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
>  	if (!vblk->vqs)
> --
> 2.18.1
Thread overview: 6+ messages

2021-08-30 12:00 [PATCH 1/1] virtio-blk: add num_io_queues module parameter Max Gurtovoy
2021-08-30 16:48 ` Christoph Hellwig
2021-08-30 21:48 ` Michael S. Tsirkin [this message]
2021-08-30 23:12 ` Max Gurtovoy