From: Mike Christie <michael.christie@oracle.com>
To: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org, mst@redhat.com,
	jasowang@redhat.com, pbonzini@redhat.com, stefanha@redhat.com,
	virtualization@lists.linux-foundation.org
Subject: [PATCH 16/16] vhost scsi: multiple worker support
Date: Wed, 07 Oct 2020 20:55:01 +0000	[thread overview]
Message-ID: <1602104101-5592-17-git-send-email-michael.christie@oracle.com> (raw)
In-Reply-To: <1602104101-5592-1-git-send-email-michael.christie@oracle.com>

Create a vhost_worker per IO vq. When using more than 2 vqs and/or
multiple LUNs per vhost-scsi dev, we hit a bottleneck with the single
worker, because all vqs and all LUNs are started and completed from the
same thread. Combined with the previous patches that allow us to add
more than 2 vqs, we see IOPs for workloads like 50/50 randrw 4K IOs go
from 150K to 400K, where the native device gets 500K. For the lio
rd_mcp backend, we see IOPs go from 400K to 600K.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 4309f97..e5f73c1 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1624,6 +1624,22 @@ static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
 		memcpy(vs->vs_vhost_wwpn, t->vhost_wwpn,
 		       sizeof(vs->vs_vhost_wwpn));
 
+		/*
+		 * For compat, have the evt and ctl vqs share worker0 with
+		 * the first IO vq, as is already set up by default. Any
+		 * additional vqs will get their own worker.
+		 *
+		 * Note: if we fail later, then the vhost_dev_cleanup call on
+		 * release() will clean up all the workers.
+		 */
+		ret = vhost_workers_create(&vs->dev,
+					   vs->dev.nvqs - VHOST_SCSI_VQ_IO);
+		if (ret) {
+			pr_err("Could not create vhost-scsi workers. Error %d.\n",
+			       ret);
+			goto undepend;
+		}
+
 		for (i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) {
 			vq = &vs->vqs[i].vq;
 			if (!vq->initialized)
@@ -1631,6 +1647,7 @@ static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
 
 			if (vhost_scsi_setup_vq_cmds(vq, vq->num))
 				goto destroy_vq_cmds;
+			vhost_vq_set_worker(vq, i - VHOST_SCSI_VQ_IO);
 		}
 
 		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
-- 
1.8.3.1
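
For context, here is a minimal standalone sketch (not kernel code) of
the vq-to-worker mapping the hunk above sets up. It assumes the
vhost-scsi vq layout where the ctl and evt vqs occupy slots 0 and 1 and
the IO vqs start at VHOST_SCSI_VQ_IO (2); vq_to_worker() is a
hypothetical helper that only illustrates the arithmetic behind the
vhost_vq_set_worker(vq, i - VHOST_SCSI_VQ_IO) call in the patch:

#include <stdio.h>

#define VHOST_SCSI_VQ_IO	2

/*
 * Worker index for a given vq index: the ctl and evt vqs share
 * worker 0 with the first IO vq, and each later IO vq gets its
 * own worker.
 */
static int vq_to_worker(int vq_idx)
{
	if (vq_idx < VHOST_SCSI_VQ_IO)
		return 0;	/* ctl and evt ride on worker 0 */
	return vq_idx - VHOST_SCSI_VQ_IO;
}

int main(void)
{
	int nvqs = 8;	/* e.g. ctl + evt + 6 IO vqs */
	int i;

	for (i = 0; i < nvqs; i++)
		printf("vq %d -> worker %d\n", i, vq_to_worker(i));
	/*
	 * nvqs - VHOST_SCSI_VQ_IO workers are created in total,
	 * matching the vhost_workers_create() call in the patch.
	 */
	return 0;
}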

Thread overview: 68+ messages
2020-10-07 20:54 [PATCH 00/16 V2] vhost: fix scsi cmd handling and IOPs Mike Christie
2020-10-07 20:54 ` [PATCH 01/16] vhost scsi: add lun parser helper Mike Christie
2020-10-07 20:54 ` [PATCH 02/16] vhost: remove work arg from vhost_work_flush Mike Christie
2020-10-07 20:54 ` [PATCH 03/16] vhost net: use goto error handling in open Mike Christie
2020-10-07 20:54 ` [PATCH 04/16] vhost: prep vhost_dev_init users to handle failures Mike Christie
2020-10-08  0:58   ` kernel test robot
2020-10-09 11:41   ` Dan Carpenter
2020-10-23 15:56     ` Michael S. Tsirkin
2020-10-23 16:21       ` Mike Christie
2020-10-07 20:54 ` [PATCH 05/16] vhost: move vq iovec allocation to dev init time Mike Christie
2020-10-07 20:54 ` [PATCH 06/16] vhost: support delayed vq creation Mike Christie
2020-10-07 20:54 ` [PATCH 07/16] vhost scsi: support delayed IO vq creation Mike Christie
2020-10-07 20:54 ` [PATCH 08/16] vhost scsi: alloc cmds per vq instead of session Mike Christie
2020-10-07 20:54 ` [PATCH 09/16] vhost scsi: fix cmd completion race Mike Christie
2020-10-07 20:54 ` [PATCH 10/16] vhost scsi: Add support for LUN resets Mike Christie
2020-10-07 20:54 ` [PATCH 11/16] vhost scsi: remove extra flushes Mike Christie
2020-10-07 20:54 ` [PATCH 12/16] vhost: support multiple worker threads Mike Christie
2020-10-08 17:56   ` Mike Christie
2020-10-08 20:26     ` Michael S. Tsirkin
2020-10-07 20:54 ` [PATCH 13/16] vhost poll: fix coding style Mike Christie
2020-10-07 20:54 ` [PATCH 14/16] vhost: poll support support multiple workers Mike Christie
2020-10-08  0:46   ` kernel test robot
2020-10-23 15:43     ` Michael S. Tsirkin
2020-10-07 20:55 ` [PATCH 15/16] vhost scsi: make completion per vq Mike Christie
2020-10-07 20:55 ` [PATCH 16/16] vhost scsi: multiple worker support Mike Christie [this message]
2020-10-23 15:46 ` [PATCH 00/16 V2] vhost: fix scsi cmd handling and IOPs Michael S. Tsirkin
2020-10-23 16:22   ` Mike Christie
