* [PATCH v5 0/2] virtio-blk: support polling I/O and mq_ops->queue_rqs()
@ 2022-04-05 15:09 Suwan Kim
  2022-04-05 15:09 ` [PATCH v5 1/2] virtio-blk: support polling I/O Suwan Kim
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Suwan Kim @ 2022-04-05 15:09 UTC (permalink / raw)
  To: mst, jasowang, stefanha, pbonzini, mgurtovoy, dongli.zhang, hch
  Cc: virtualization, linux-block, Suwan Kim

This patch series adds support for polling I/O and mq_ops->queue_rqs()
to the virtio-blk driver.

Changes

v4 -> v5
    - patch1 : virtblk_poll
        - Replace "req_done" with "found" in virtblk_poll()
        - Split the for loop into two distinct for loops in init_vq()
          that set the callback functions for the default and poll queues
        - Replace "if (i == HCTX_TYPE_DEFAULT)" with "i != HCTX_TYPE_POLL"
          in virtblk_map_queues()
        - Replace "virtblk_unmap_data(req, vbr);" with
          "virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));"
          in virtblk_complete_batch()
    
    - patch2 : virtio_queue_rqs
        - Instead of using the vbr.sg_num field, use vbr->sg_table.nents,
          and remove the sg_num field from struct virtblk_req
        - Drop the unnecessary arguments of virtblk_add_req() because it
          doesn't need "data_sg" and "have_data"; they can be derived
          from the "vbr" argument.
        - Add Reviewed-by tag from Stefan

v3 -> v4
    - patch1 : virtblk_poll
        - Print the number of default/read/poll queues in init_vq()
        - Add blk_mq_start_stopped_hw_queues() to virtblk_poll()
              virtblk_poll()
                  ...
                  if (req_done)
                                   blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
                  ...

    - patch2 : virtio_queue_rqs
        - Modify virtio_queue_rqs() to hold the lock only once when it
          adds requests to the virtqueue, just before the virtqueue
          notify. This guarantees that virtio_queue_rqs() will not use
          the previous req again.

v2 -> v3
        - Fix a warning reported by the kernel test robot
          
            static int virtblk_poll()
                ...
                if (!blk_mq_add_to_batch(req, iob, virtblk_result(vbr),
                                                   -> vbr->status,

v1 -> v2
        - To receive the number of poll queues from the user, use a
          module parameter instead of a QEMU UAPI change.

        - Add a comment about virtblk_map_queues().

        - Add support for mq_ops->queue_rqs() to implement submit-side
          batching.

Suwan Kim (2):
  virtio-blk: support polling I/O
  virtio-blk: support mq_ops->queue_rqs()

 drivers/block/virtio_blk.c | 229 +++++++++++++++++++++++++++++++++----
 1 file changed, 206 insertions(+), 23 deletions(-)

-- 
2.26.3


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v5 1/2] virtio-blk: support polling I/O
  2022-04-05 15:09 [PATCH v5 0/2] virtio-blk: support polling I/O and mq_ops->queue_rqs() Suwan Kim
@ 2022-04-05 15:09 ` Suwan Kim
  2022-04-06  1:43   ` Elliott, Robert (Servers)
  2022-04-06  5:00     ` Christoph Hellwig
  2022-04-05 15:09 ` [PATCH v5 2/2] virtio-blk: support mq_ops->queue_rqs() Suwan Kim
  2022-04-05 15:46   ` Stefan Hajnoczi
  2 siblings, 2 replies; 13+ messages in thread
From: Suwan Kim @ 2022-04-05 15:09 UTC (permalink / raw)
  To: mst, jasowang, stefanha, pbonzini, mgurtovoy, dongli.zhang, hch
  Cc: virtualization, linux-block, Suwan Kim

This patch adds support for polling I/O to the virtio-blk driver.
Polling is enabled by the module parameter "poll_queues", which sets
up dedicated polling queues for virtio-blk. This improves polling I/O
throughput and latency.

The virtio-blk driver currently has neither a poll function nor poll
queues, so it operates in interrupt-driven mode even when the upper
layer issues a polled request.

virtio-blk polling is built on top of the block layer's batched
completion. virtblk_poll() queues each completed request on
io_comp_batch->req_list, and virtblk_complete_batch() later unmaps
the data and ends the requests in a batch.

virtio-blk reads the number of poll queues from the module parameter
"poll_queues". If the VM is configured with "num-queues=N" (QEMU
property) and "poll_queues=M" (module parameter), the driver allocates
N virtqueues in virtio_blk->vqs[N] and uses [0..(N-M-1)] as default
queues and [(N-M)..(N-1)] as poll queues. Unlike the default queues,
the poll queues have no callback function.

Regarding the HW-SW queue mapping, the default queues keep the
existing mapping that considers the MSI IRQ vectors. The poll queues
have no IRQ, so they use the regular blk-mq CPU mapping.

To verify the improvement, I ran a fio polling I/O performance test
with the io_uring engine and the options below.
(io_uring, hipri, randread, direct=1, bs=512, iodepth=64, numjobs=N)
The VM was configured with 4 vCPUs and 4 virtio-blk queues: 2 default
queues and 2 poll queues.

As a result, IOPS and average latency improved by about 10%.

Test result:

- Fio io_uring poll without virtio-blk poll support
	-- numjobs=1 : IOPS = 339K, avg latency = 188.33us
	-- numjobs=2 : IOPS = 367K, avg latency = 347.33us
	-- numjobs=4 : IOPS = 383K, avg latency = 682.06us

- Fio io_uring poll with virtio-blk poll support
	-- numjobs=1 : IOPS = 385K, avg latency = 165.94us
	-- numjobs=2 : IOPS = 408K, avg latency = 313.28us
	-- numjobs=4 : IOPS = 424K, avg latency = 613.05us

Signed-off-by: Suwan Kim <suwan.kim027@gmail.com>
---
 drivers/block/virtio_blk.c | 115 ++++++++++++++++++++++++++++++++++---
 1 file changed, 108 insertions(+), 7 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 8c415be86732..51eea2a49e11 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -37,6 +37,10 @@ MODULE_PARM_DESC(num_request_queues,
 		 "0 for no limit. "
 		 "Values > nr_cpu_ids truncated to nr_cpu_ids.");
 
+static unsigned int poll_queues;
+module_param(poll_queues, uint, 0644);
+MODULE_PARM_DESC(poll_queues, "The number of dedicated virtqueues for polling I/O");
+
 static int major;
 static DEFINE_IDA(vd_index_ida);
 
@@ -81,6 +85,7 @@ struct virtio_blk {
 
 	/* num of vqs */
 	int num_vqs;
+	int io_queues[HCTX_MAX_TYPES];
 	struct virtio_blk_vq *vqs;
 };
 
@@ -548,6 +553,7 @@ static int init_vq(struct virtio_blk *vblk)
 	const char **names;
 	struct virtqueue **vqs;
 	unsigned short num_vqs;
+	unsigned int num_poll_vqs;
 	struct virtio_device *vdev = vblk->vdev;
 	struct irq_affinity desc = { 0, };
 
@@ -556,6 +562,7 @@ static int init_vq(struct virtio_blk *vblk)
 				   &num_vqs);
 	if (err)
 		num_vqs = 1;
+
 	if (!err && !num_vqs) {
 		dev_err(&vdev->dev, "MQ advertised but zero queues reported\n");
 		return -EINVAL;
@@ -565,6 +572,18 @@ static int init_vq(struct virtio_blk *vblk)
 			min_not_zero(num_request_queues, nr_cpu_ids),
 			num_vqs);
 
+	num_poll_vqs = min_t(unsigned int, poll_queues, num_vqs - 1);
+
+	memset(vblk->io_queues, 0, sizeof(int) * HCTX_MAX_TYPES);
+	vblk->io_queues[HCTX_TYPE_DEFAULT] = num_vqs - num_poll_vqs;
+	vblk->io_queues[HCTX_TYPE_READ] = 0;
+	vblk->io_queues[HCTX_TYPE_POLL] = num_poll_vqs;
+
+	dev_info(&vdev->dev, "%d/%d/%d default/read/poll queues\n",
+				vblk->io_queues[HCTX_TYPE_DEFAULT],
+				vblk->io_queues[HCTX_TYPE_READ],
+				vblk->io_queues[HCTX_TYPE_POLL]);
+
 	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
 	if (!vblk->vqs)
 		return -ENOMEM;
@@ -577,11 +596,17 @@ static int init_vq(struct virtio_blk *vblk)
 		goto out;
 	}
 
-	for (i = 0; i < num_vqs; i++) {
-		callbacks[i] = virtblk_done;
-		snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req.%d", i);
-		names[i] = vblk->vqs[i].name;
-	}
+        for (i = 0; i < num_vqs - num_poll_vqs; i++) {
+                callbacks[i] = virtblk_done;
+                snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req.%d", i);
+                names[i] = vblk->vqs[i].name;
+        }
+
+        for (; i < num_vqs; i++) {
+                callbacks[i] = NULL;
+                snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req_poll.%d", i);
+                names[i] = vblk->vqs[i].name;
+        }
 
 	/* Discover virtqueues and write information to configuration.  */
 	err = virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names, &desc);
@@ -728,16 +753,89 @@ static const struct attribute_group *virtblk_attr_groups[] = {
 static int virtblk_map_queues(struct blk_mq_tag_set *set)
 {
 	struct virtio_blk *vblk = set->driver_data;
+	int i, qoff;
+
+	for (i = 0, qoff = 0; i < set->nr_maps; i++) {
+		struct blk_mq_queue_map *map = &set->map[i];
+
+		map->nr_queues = vblk->io_queues[i];
+		map->queue_offset = qoff;
+		qoff += map->nr_queues;
+
+		if (map->nr_queues == 0)
+			continue;
+
+		/*
+		 * Regular queues have interrupts and hence CPU affinity is
+		 * defined by the core virtio code, but polling queues have
+		 * no interrupts so we let the block layer assign CPU affinity.
+		 */
+		if (i != HCTX_TYPE_POLL)
+			blk_mq_virtio_map_queues(&set->map[i], vblk->vdev, 0);
+		else
+			blk_mq_map_queues(&set->map[i]);
+	}
+
+	return 0;
+}
+
+static void virtblk_complete_batch(struct io_comp_batch *iob)
+{
+	struct request *req;
 
-	return blk_mq_virtio_map_queues(&set->map[HCTX_TYPE_DEFAULT],
-					vblk->vdev, 0);
+	rq_list_for_each(&iob->req_list, req) {
+		virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));
+		virtblk_cleanup_cmd(req);
+	}
+	blk_mq_end_request_batch(iob);
+}
+
+static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+{
+	struct virtio_blk *vblk = hctx->queue->queuedata;
+	struct virtio_blk_vq *vq = hctx->driver_data;
+	struct virtblk_req *vbr;
+	unsigned long flags;
+	unsigned int len;
+	int found = 0;
+
+	spin_lock_irqsave(&vq->lock, flags);
+
+	while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
+		struct request *req = blk_mq_rq_from_pdu(vbr);
+
+		found++;
+		if (!blk_mq_add_to_batch(req, iob, vbr->status,
+						virtblk_complete_batch))
+			blk_mq_complete_request(req);
+	}
+
+	if (found)
+		blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
+
+	spin_unlock_irqrestore(&vq->lock, flags);
+
+	return found;
+}
+
+static int virtblk_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
+			  unsigned int hctx_idx)
+{
+	struct virtio_blk *vblk = data;
+	struct virtio_blk_vq *vq = &vblk->vqs[hctx_idx];
+
+	WARN_ON(vblk->tag_set.tags[hctx_idx] != hctx->tags);
+	hctx->driver_data = vq;
+	return 0;
 }
 
 static const struct blk_mq_ops virtio_mq_ops = {
 	.queue_rq	= virtio_queue_rq,
 	.commit_rqs	= virtio_commit_rqs,
+	.init_hctx	= virtblk_init_hctx,
 	.complete	= virtblk_request_done,
 	.map_queues	= virtblk_map_queues,
+	.poll		= virtblk_poll,
 };
 
 static unsigned int virtblk_queue_depth;
@@ -816,6 +914,9 @@ static int virtblk_probe(struct virtio_device *vdev)
 		sizeof(struct scatterlist) * VIRTIO_BLK_INLINE_SG_CNT;
 	vblk->tag_set.driver_data = vblk;
 	vblk->tag_set.nr_hw_queues = vblk->num_vqs;
+	vblk->tag_set.nr_maps = 1;
+	if (vblk->io_queues[HCTX_TYPE_POLL])
+		vblk->tag_set.nr_maps = 3;
 
 	err = blk_mq_alloc_tag_set(&vblk->tag_set);
 	if (err)
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v5 2/2] virtio-blk: support mq_ops->queue_rqs()
  2022-04-05 15:09 [PATCH v5 0/2] virtio-blk: support polling I/O and mq_ops->queue_rqs() Suwan Kim
  2022-04-05 15:09 ` [PATCH v5 1/2] virtio-blk: support polling I/O Suwan Kim
@ 2022-04-05 15:09 ` Suwan Kim
  2022-04-06  5:01     ` Christoph Hellwig
  2022-04-05 15:46   ` Stefan Hajnoczi
  2 siblings, 1 reply; 13+ messages in thread
From: Suwan Kim @ 2022-04-05 15:09 UTC (permalink / raw)
  To: mst, jasowang, stefanha, pbonzini, mgurtovoy, dongli.zhang, hch
  Cc: virtualization, linux-block, Suwan Kim

This patch adds support for the mq_ops->queue_rqs() hook, which
enables batch submission to the virtio-blk driver. It also helps
polling I/O because polling uses the block layer's batched completion;
batch submission in queue_rqs() can boost polling performance.

queue_rqs() iterates over plug->mq_list, collecting requests that
belong to the same HW queue until it encounters a request from another
HW queue or reaches the end of the list. Then virtio-blk adds the
collected requests to the virtqueue and kicks the virtqueue to submit
them.

If there is an error, the failing request is put on the requeue_list
and passed back to the ordinary block layer path.

For verification, I ran a fio test.
(io_uring, randread, direct=1, bs=4K, iodepth=64, numjobs=N)
The VM was configured with 4 vCPUs and 2 virtio-blk queues, and I ran
the test 5 times. It shows about a 2% improvement.

                                 |   numjobs=2   |   numjobs=4
      -----------------------------------------------------------
        fio without queue_rqs()  |   291K IOPS   |   238K IOPS
      -----------------------------------------------------------
        fio with queue_rqs()     |   295K IOPS   |   243K IOPS

For polling I/O performance, I also ran a fio test as below.
(io_uring, hipri, randread, direct=1, bs=512, iodepth=64, numjobs=4)
The VM was configured with 4 vCPUs and 2 poll queues.
It shows about a 2% improvement in polling I/O.

                                      |   IOPS   |  avg latency
      -----------------------------------------------------------
        fio poll without queue_rqs()  |   424K   |   613.05 usec
      -----------------------------------------------------------
        fio poll with queue_rqs()     |   435K   |   601.01 usec

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Suwan Kim <suwan.kim027@gmail.com>
---
 drivers/block/virtio_blk.c | 114 +++++++++++++++++++++++++++++++------
 1 file changed, 98 insertions(+), 16 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 51eea2a49e11..9260522eaaa6 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -108,8 +108,7 @@ static inline blk_status_t virtblk_result(struct virtblk_req *vbr)
 	}
 }
 
-static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
-		struct scatterlist *data_sg, bool have_data)
+static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr)
 {
 	struct scatterlist hdr, status, *sgs[3];
 	unsigned int num_out = 0, num_in = 0;
@@ -117,11 +116,11 @@ static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
 	sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
 	sgs[num_out++] = &hdr;
 
-	if (have_data) {
+	if (vbr->sg_table.nents) {
 		if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
-			sgs[num_out++] = data_sg;
+			sgs[num_out++] = vbr->sg_table.sgl;
 		else
-			sgs[num_out + num_in++] = data_sg;
+			sgs[num_out + num_in++] = vbr->sg_table.sgl;
 	}
 
 	sg_init_one(&status, &vbr->status, sizeof(vbr->status));
@@ -311,6 +310,28 @@ static void virtio_commit_rqs(struct blk_mq_hw_ctx *hctx)
 		virtqueue_notify(vq->vq);
 }
 
+static blk_status_t virtblk_prep_rq(struct blk_mq_hw_ctx *hctx,
+					struct virtio_blk *vblk,
+					struct request *req,
+					struct virtblk_req *vbr)
+{
+	blk_status_t status;
+
+	status = virtblk_setup_cmd(vblk->vdev, req, vbr);
+	if (unlikely(status))
+		return status;
+
+	blk_mq_start_request(req);
+
+	vbr->sg_table.nents = virtblk_map_data(hctx, req, vbr);
+	if (unlikely(vbr->sg_table.nents < 0)) {
+		virtblk_cleanup_cmd(req);
+		return BLK_STS_RESOURCE;
+	}
+
+	return BLK_STS_OK;
+}
+
 static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 			   const struct blk_mq_queue_data *bd)
 {
@@ -318,26 +339,17 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct request *req = bd->rq;
 	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
 	unsigned long flags;
-	int num;
 	int qid = hctx->queue_num;
 	bool notify = false;
 	blk_status_t status;
 	int err;
 
-	status = virtblk_setup_cmd(vblk->vdev, req, vbr);
+	status = virtblk_prep_rq(hctx, vblk, req, vbr);
 	if (unlikely(status))
 		return status;
 
-	blk_mq_start_request(req);
-
-	num = virtblk_map_data(hctx, req, vbr);
-	if (unlikely(num < 0)) {
-		virtblk_cleanup_cmd(req);
-		return BLK_STS_RESOURCE;
-	}
-
 	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
-	err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg_table.sgl, num);
+	err = virtblk_add_req(vblk->vqs[qid].vq, vbr);
 	if (err) {
 		virtqueue_kick(vblk->vqs[qid].vq);
 		/* Don't stop the queue if -ENOMEM: we may have failed to
@@ -367,6 +379,75 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
+static bool virtblk_prep_rq_batch(struct request *req)
+{
+	struct virtio_blk *vblk = req->mq_hctx->queue->queuedata;
+	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
+
+	req->mq_hctx->tags->rqs[req->tag] = req;
+
+	return virtblk_prep_rq(req->mq_hctx, vblk, req, vbr) == BLK_STS_OK;
+}
+
+static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
+					struct request **rqlist,
+					struct request **requeue_list)
+{
+	unsigned long flags;
+	int err;
+	bool kick;
+
+	spin_lock_irqsave(&vq->lock, flags);
+
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
+
+		err = virtblk_add_req(vq->vq, vbr);
+		if (err) {
+			virtblk_unmap_data(req, vbr);
+			virtblk_cleanup_cmd(req);
+			rq_list_add(requeue_list, req);
+		}
+	}
+
+	kick = virtqueue_kick_prepare(vq->vq);
+	spin_unlock_irqrestore(&vq->lock, flags);
+
+	return kick;
+}
+
+static void virtio_queue_rqs(struct request **rqlist)
+{
+	struct request *req, *next, *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	rq_list_for_each_safe(rqlist, req, next) {
+		struct virtio_blk_vq *vq = req->mq_hctx->driver_data;
+		bool kick;
+
+		if (!virtblk_prep_rq_batch(req)) {
+			rq_list_move(rqlist, &requeue_list, req, prev);
+			req = prev;
+			if (!req)
+				continue;
+		}
+
+		if (!next || req->mq_hctx != next->mq_hctx) {
+			req->rq_next = NULL;
+			kick = virtblk_add_req_batch(vq, rqlist, &requeue_list);
+			if (kick)
+				virtqueue_notify(vq->vq);
+
+			*rqlist = next;
+			prev = NULL;
+		} else
+			prev = req;
+	}
+
+	*rqlist = requeue_list;
+}
+
 /* return id (s/n) string for *disk to *id_str
  */
 static int virtblk_get_id(struct gendisk *disk, char *id_str)
@@ -831,6 +912,7 @@ static int virtblk_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
 
 static const struct blk_mq_ops virtio_mq_ops = {
 	.queue_rq	= virtio_queue_rq,
+	.queue_rqs	= virtio_queue_rqs,
 	.commit_rqs	= virtio_commit_rqs,
 	.init_hctx	= virtblk_init_hctx,
 	.complete	= virtblk_request_done,
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 0/2] virtio-blk: support polling I/O and mq_ops->queue_rqs()
  2022-04-05 15:09 [PATCH v5 0/2] virtio-blk: support polling I/O and mq_ops->queue_rqs() Suwan Kim
@ 2022-04-05 15:46   ` Stefan Hajnoczi
  2022-04-05 15:09 ` [PATCH v5 2/2] virtio-blk: support mq_ops->queue_rqs() Suwan Kim
  2022-04-05 15:46   ` Stefan Hajnoczi
  2 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2022-04-05 15:46 UTC (permalink / raw)
  To: Suwan Kim; +Cc: mgurtovoy, linux-block, mst, virtualization, hch, pbonzini


On Wed, Apr 06, 2022 at 12:09:22AM +0900, Suwan Kim wrote:
> This patch series adds support for polling I/O and mq_ops->queue_rqs()
> to virtio-blk driver.
> 
> Changes
> 
> v4 -> v5
>     - patch1 : virtblk_poll
>         - Replace "req_done" with "found" in virtblk_poll()
>         - Split for loop into two distinct for loop in init_vq()
>           that sets callback function for each default/poll queues
>         - Replace "if (i == HCTX_TYPE_DEFAULT)" with "i != HCTX_TYPE_POLL"
>           in virtblk_map_queues()
>         - Replace "virtblk_unmap_data(req, vbr);" with
>           "virtblk_unmap_data(req, blk_mq_rq_to_pdu(req);"
>           in virtblk_complete_batch()
>     
>     - patch2 : virtio_queue_rqs
>         - Instead of using vbr.sg_num field, use vbr->sg_table.nents.
>           So, remove sg_num field in struct virtblk_req
>         - Drop the unnecessary argument of virtblk_add_req() because it
>           doesn't need "data_sg" and "have_data". It can be derived from "vbr"
>           argument.
>         - Add Reviewed-by tag from Stefan
> 
> v3 -> v4
>     - patch1 : virtblk_poll
>         - Add print the number of default/read/poll queues in init_vq()
>         - Add blk_mq_start_stopped_hw_queues() to virtblk_poll()
>               virtblk_poll()
>                   ...
>                   if (req_done)
>                                    blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
>                   ...
> 
>     - patch2 : virtio_queue_rqs
>         - Modify virtio_queue_rqs() to hold lock only once when it adds
>           requests to virtqueue just before virtqueue notify.
>           It will guarantee that virtio_queue_rqs() will not use
>           previous req again.
> 
> v2 -> v3
>         - Fix warning by kernel test robot
>           
>             static int virtblk_poll()
>                 ...
>                 if (!blk_mq_add_to_batch(req, iob, virtblk_result(vbr),
>                                                    -> vbr->status,
> 
> v1 -> v2
>         - To receive the number of poll queues from user,
>           use module parameter instead of QEMU uapi change.
> 
>         - Add the comment about virtblk_map_queues().
> 
>         - Add support for mq_ops->queue_rqs() to implement submit side
>           batch.
> 
> Suwan Kim (2):
>   virtio-blk: support polling I/O
>   virtio-blk: support mq_ops->queue_rqs()
> 
>  drivers/block/virtio_blk.c | 229 +++++++++++++++++++++++++++++++++----
>  1 file changed, 206 insertions(+), 23 deletions(-)
> 
> -- 
> 2.26.3
> 

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH v5 1/2] virtio-blk: support polling I/O
  2022-04-05 15:09 ` [PATCH v5 1/2] virtio-blk: support polling I/O Suwan Kim
@ 2022-04-06  1:43   ` Elliott, Robert (Servers)
  2022-04-06 13:36     ` Suwan Kim
  2022-04-06  5:00     ` Christoph Hellwig
  1 sibling, 1 reply; 13+ messages in thread
From: Elliott, Robert (Servers) @ 2022-04-06  1:43 UTC (permalink / raw)
  To: 'Suwan Kim',
	mst, jasowang, stefanha, pbonzini, mgurtovoy, dongli.zhang, hch
  Cc: virtualization, linux-block



> -----Original Message-----
> From: Suwan Kim <suwan.kim027@gmail.com>
> Sent: Tuesday, April 5, 2022 10:09 AM
> Subject: [PATCH v5 1/2] virtio-blk: support polling I/O
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> @@ -81,6 +85,7 @@ struct virtio_blk {
> 
> 	/* num of vqs */
> 	int num_vqs;
> +	int io_queues[HCTX_MAX_TYPES];
>  	struct virtio_blk_vq *vqs;
...
>  };
> @@ -565,6 +572,18 @@ static int init_vq(struct virtio_blk *vblk)
>  			min_not_zero(num_request_queues, nr_cpu_ids),
>  			num_vqs);
> 
> +	num_poll_vqs = min_t(unsigned int, poll_queues, num_vqs - 1);
> +
> +	memset(vblk->io_queues, 0, sizeof(int) * HCTX_MAX_TYPES);

Using
    sizeof(vblk->io_queues)
would automatically follow any changes in the definition of that field,
similar to the line below:

...
>  	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 1/2] virtio-blk: support polling I/O
  2022-04-05 15:09 ` [PATCH v5 1/2] virtio-blk: support polling I/O Suwan Kim
@ 2022-04-06  5:00     ` Christoph Hellwig
  2022-04-06  5:00     ` Christoph Hellwig
  1 sibling, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2022-04-06  5:00 UTC (permalink / raw)
  To: Suwan Kim
  Cc: mgurtovoy, linux-block, mst, virtualization, hch, stefanha, pbonzini

On Wed, Apr 06, 2022 at 12:09:23AM +0900, Suwan Kim wrote:
> +        for (i = 0; i < num_vqs - num_poll_vqs; i++) {
> +                callbacks[i] = virtblk_done;
> +                snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req.%d", i);
> +                names[i] = vblk->vqs[i].name;
> +        }
> +
> +        for (; i < num_vqs; i++) {
> +                callbacks[i] = NULL;
> +                snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req_poll.%d", i);
> +                names[i] = vblk->vqs[i].name;
> +        }

This uses spaces for indentation.

> +		/*
> +		 * Regular queues have interrupts and hence CPU affinity is
> +		 * defined by the core virtio code, but polling queues have
> +		 * no interrupts so we let the block layer assign CPU affinity.
> +		 */
> +		if (i != HCTX_TYPE_POLL)
> +			blk_mq_virtio_map_queues(&set->map[i], vblk->vdev, 0);
> +		else
> +			blk_mq_map_queues(&set->map[i]);

Nit, but I would have just done a "positive" check here as that is a
bit easier to read:

		if (i == HCTX_TYPE_POLL)
			blk_mq_map_queues(&set->map[i]);
		else
			blk_mq_virtio_map_queues(&set->map[i], vblk->vdev, 0);

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 2/2] virtio-blk: support mq_ops->queue_rqs()
  2022-04-05 15:09 ` [PATCH v5 2/2] virtio-blk: support mq_ops->queue_rqs() Suwan Kim
@ 2022-04-06  5:01     ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2022-04-06  5:01 UTC (permalink / raw)
  To: Suwan Kim
  Cc: mgurtovoy, linux-block, mst, virtualization, hch, stefanha, pbonzini

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 1/2] virtio-blk: support polling I/O
  2022-04-06  1:43   ` Elliott, Robert (Servers)
@ 2022-04-06 13:36     ` Suwan Kim
  0 siblings, 0 replies; 13+ messages in thread
From: Suwan Kim @ 2022-04-06 13:36 UTC (permalink / raw)
  To: Elliott, Robert (Servers)
  Cc: mst, jasowang, stefanha, pbonzini, mgurtovoy, dongli.zhang, hch,
	virtualization, linux-block

On Wed, Apr 06, 2022 at 01:43:55AM +0000, Elliott, Robert (Servers) wrote:
> 
> 
> > -----Original Message-----
> > From: Suwan Kim <suwan.kim027@gmail.com>
> > Sent: Tuesday, April 5, 2022 10:09 AM
> > Subject: [PATCH v5 1/2] virtio-blk: support polling I/O
> > 
> > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > @@ -81,6 +85,7 @@ struct virtio_blk {
> > 
> > 	/* num of vqs */
> > 	int num_vqs;
> > +	int io_queues[HCTX_MAX_TYPES];
> >  	struct virtio_blk_vq *vqs;
> ...
> >  };
> > @@ -565,6 +572,18 @@ static int init_vq(struct virtio_blk *vblk)
> >  			min_not_zero(num_request_queues, nr_cpu_ids),
> >  			num_vqs);
> > 
> > +	num_poll_vqs = min_t(unsigned int, poll_queues, num_vqs - 1);
> > +
> > +	memset(vblk->io_queues, 0, sizeof(int) * HCTX_MAX_TYPES);
> 
> Using
>     sizeof(vblk->io_queues)
> would automatically follow any changes in the definition of that field,
> similar to the line below:

Thanks for the feedback. I think the memset is unnecessary because
all entries of vblk->io_queues[] are set explicitly.
I will remove it.

Regards,
Suwan Kim

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 1/2] virtio-blk: support polling I/O
  2022-04-06  5:00     ` Christoph Hellwig
  (?)
@ 2022-04-06 13:41     ` Suwan Kim
  2022-04-06 14:10       ` Max Gurtovoy
  -1 siblings, 1 reply; 13+ messages in thread
From: Suwan Kim @ 2022-04-06 13:41 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: mst, jasowang, stefanha, pbonzini, mgurtovoy, dongli.zhang,
	virtualization, linux-block

On Tue, Apr 05, 2022 at 10:00:29PM -0700, Christoph Hellwig wrote:
> On Wed, Apr 06, 2022 at 12:09:23AM +0900, Suwan Kim wrote:
> > +        for (i = 0; i < num_vqs - num_poll_vqs; i++) {
> > +                callbacks[i] = virtblk_done;
> > +                snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req.%d", i);
> > +                names[i] = vblk->vqs[i].name;
> > +        }
> > +
> > +        for (; i < num_vqs; i++) {
> > +                callbacks[i] = NULL;
> > +                snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req_poll.%d", i);
> > +                names[i] = vblk->vqs[i].name;
> > +        }
> 
> This uses spaces for indentation.

Oh my mistake. I will fix it.

> > +		/*
> > +		 * Regular queues have interrupts and hence CPU affinity is
> > +		 * defined by the core virtio code, but polling queues have
> > +		 * no interrupts so we let the block layer assign CPU affinity.
> > +		 */
> > +		if (i != HCTX_TYPE_POLL)
> > +			blk_mq_virtio_map_queues(&set->map[i], vblk->vdev, 0);
> > +		else
> > +			blk_mq_map_queues(&set->map[i]);
> 
> Nit, but I would have just done a "positive" check here as that is a bit
> easier to read:
> 
> 		if (i == HCTX_TYPE_POLL)
> 			blk_mq_map_queues(&set->map[i]);
> 		else
> 			blk_mq_virtio_map_queues(&set->map[i], vblk->vdev, 0);

I agree. I will fix it.
Thanks for the feedback!

Regards,
Suwan Kim

>
> Otherwise looks good:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v5 1/2] virtio-blk: support polling I/O
  2022-04-06 13:41     ` Suwan Kim
@ 2022-04-06 14:10       ` Max Gurtovoy
  0 siblings, 0 replies; 13+ messages in thread
From: Max Gurtovoy @ 2022-04-06 14:10 UTC (permalink / raw)
  To: Suwan Kim, Christoph Hellwig
  Cc: mst, jasowang, stefanha, pbonzini, dongli.zhang, virtualization,
	linux-block


On 4/6/2022 4:41 PM, Suwan Kim wrote:
> On Tue, Apr 05, 2022 at 10:00:29PM -0700, Christoph Hellwig wrote:
>> On Wed, Apr 06, 2022 at 12:09:23AM +0900, Suwan Kim wrote:
>>> +        for (i = 0; i < num_vqs - num_poll_vqs; i++) {
>>> +                callbacks[i] = virtblk_done;
>>> +                snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req.%d", i);
>>> +                names[i] = vblk->vqs[i].name;
>>> +        }
>>> +
>>> +        for (; i < num_vqs; i++) {
>>> +                callbacks[i] = NULL;
>>> +                snprintf(vblk->vqs[i].name, VQ_NAME_LEN, "req_poll.%d", i);
>>> +                names[i] = vblk->vqs[i].name;
>>> +        }
>> This uses spaces for indentation.
> Oh my mistake. I will fix it.
>
>>> +		/*
>>> +		 * Regular queues have interrupts and hence CPU affinity is
>>> +		 * defined by the core virtio code, but polling queues have
>>> +		 * no interrupts so we let the block layer assign CPU affinity.
>>> +		 */
>>> +		if (i != HCTX_TYPE_POLL)
>>> +			blk_mq_virtio_map_queues(&set->map[i], vblk->vdev, 0);
>>> +		else
>>> +			blk_mq_map_queues(&set->map[i]);
>> Nit, but I would have just done a "positive" check here as that is a bit
>> easier to read:
>>
>> 		if (i == HCTX_TYPE_POLL)
>> 			blk_mq_map_queues(&set->map[i]);
>> 		else
>> 			blk_mq_virtio_map_queues(&set->map[i], vblk->vdev, 0);
> I agree. I will fix it.
> Thanks for the feedback!

Looks good with Christoph's and Robert's comments fixed (memset removal).

Please run checkpatch.pl on the new version to verify we don't have more
issues such as spaces vs. tabs.

Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>

>
> Regards,
> Suwan Kim
>
>> Otherwise looks good:
>>
>> Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2022-04-06 16:46 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-05 15:09 [PATCH v5 0/2] virtio-blk: support polling I/O and mq_ops->queue_rqs() Suwan Kim
2022-04-05 15:09 ` [PATCH v5 1/2] virtio-blk: support polling I/O Suwan Kim
2022-04-06  1:43   ` Elliott, Robert (Servers)
2022-04-06 13:36     ` Suwan Kim
2022-04-06  5:00   ` Christoph Hellwig
2022-04-06  5:00     ` Christoph Hellwig
2022-04-06 13:41     ` Suwan Kim
2022-04-06 14:10       ` Max Gurtovoy
2022-04-05 15:09 ` [PATCH v5 2/2] virtio-blk: support mq_ops->queue_rqs() Suwan Kim
2022-04-06  5:01   ` Christoph Hellwig
2022-04-06  5:01     ` Christoph Hellwig
2022-04-05 15:46 ` [PATCH v5 0/2] virtio-blk: support polling I/O and mq_ops->queue_rqs() Stefan Hajnoczi
2022-04-05 15:46   ` Stefan Hajnoczi
