linux-scsi.vger.kernel.org archive mirror
* [PATCH 0/8] vhost scsi: fixes and cleanups
@ 2020-09-21 18:23 Mike Christie
  2020-09-21 18:23 ` [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling Mike Christie
                   ` (7 more replies)
  0 siblings, 8 replies; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

The following patches were made over Linus's tree, but they also apply to
Martin's 5.10 queue branch and Michael's vhost branch. The patches fix a
couple of issues with vhost scsi's cmd allocation and completion handling,
add support for LUN RESETs, and clean up the vhost scsi flush code.



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling
  2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
@ 2020-09-21 18:23 ` Mike Christie
  2020-09-22  1:58   ` Jason Wang
  2020-09-21 18:23 ` [PATCH 2/8] vhost: add helper to check if a vq has been setup Mike Christie
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

We must free the vqs array in the open failure path, because
vhost_vdpa_release will not be called.
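
For context, the leak is on the path where setup fails after the vqs
array has been allocated. A rough sketch of the flow (abridged, not the
exact driver code):

	vqs = kmalloc_array(nvqs, sizeof(*vqs), GFP_KERNEL);
	if (!vqs) {
		r = -ENOMEM;
		goto err;
	}
	...
	vhost_dev_init(&v->vdev, vqs, nvqs, ...);

	r = ...;			/* a later setup step fails */
	if (r)
		goto err_init_iotlb;
	...
	return 0;

err_init_iotlb:
	vhost_dev_cleanup(&v->vdev);
	kfree(vqs);			/* added here; release() never runs for this fd */
err:
	atomic_dec(&v->opened);
	return r;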

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/vdpa.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 3fab94f..3301214 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -808,6 +808,7 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
 
 err_init_iotlb:
 	vhost_dev_cleanup(&v->vdev);
+	kfree(vqs);
 err:
 	atomic_dec(&v->opened);
 	return r;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 2/8] vhost: add helper to check if a vq has been setup
  2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
  2020-09-21 18:23 ` [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling Mike Christie
@ 2020-09-21 18:23 ` Mike Christie
  2020-09-22  2:02   ` Jason Wang
  2020-09-22  2:45   ` Bart Van Assche
  2020-09-21 18:23 ` [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session Mike Christie
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

This adds a helper to check if a vq has been set up. The next patches
will use it when we move the vhost scsi cmd preallocation from per
session to per vq. In the per vq case, we only want to allocate cmds
for vqs that have actually been set up, not for all the possible vqs.
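
For context, patch 3 ends up using the helper roughly like this when it
sets up the per-vq command pools (sketch lifted from the later patch):

	for (i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) {
		vq = &vs->vqs[i].vq;
		if (!vhost_vq_is_setup(vq))
			continue;

		if (vhost_scsi_setup_vq_cmds(vq, vq->num))
			goto destroy_vq_cmds;
	}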

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/vhost.c | 9 +++++++++
 drivers/vhost/vhost.h | 1 +
 2 files changed, 10 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index b45519c..5dd9eb1 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -305,6 +305,15 @@ static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
 	spin_lock_init(&call_ctx->ctx_lock);
 }
 
+bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
+{
+	if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
+		return true;
+	else
+		return false;
+}
+EXPORT_SYMBOL_GPL(vhost_vq_is_setup);
+
 static void vhost_vq_reset(struct vhost_dev *dev,
 			   struct vhost_virtqueue *vq)
 {
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 9032d3c..3d30b3d 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -190,6 +190,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *,
 		      struct vhost_log *log, unsigned int *log_num);
 void vhost_discard_vq_desc(struct vhost_virtqueue *, int n);
 
+bool vhost_vq_is_setup(struct vhost_virtqueue *vq);
 int vhost_vq_init_access(struct vhost_virtqueue *);
 int vhost_add_used(struct vhost_virtqueue *, unsigned int head, int len);
 int vhost_add_used_n(struct vhost_virtqueue *, struct vring_used_elem *heads,
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
  2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
  2020-09-21 18:23 ` [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling Mike Christie
  2020-09-21 18:23 ` [PATCH 2/8] vhost: add helper to check if a vq has been setup Mike Christie
@ 2020-09-21 18:23 ` Mike Christie
  2020-09-22  1:32   ` kernel test robot
                     ` (2 more replies)
  2020-09-21 18:23 ` [PATCH 4/8] vhost scsi: fix cmd completion race Mike Christie
                   ` (4 subsequent siblings)
  7 siblings, 3 replies; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

We are currently limited to 256 cmds per session. This leads to problems
where, if the user has increased virtqueue_size to more than 2 or
cmd_per_lun to more than 256, vhost_scsi_get_tag can fail and the guest
will get IO errors.

This patch moves the cmd allocation to per vq so we can easily match
whatever the user has specified for num_queues and
virtqueue_size/cmd_per_lun. It also makes it easier to control how much
memory we preallocate. For cases where perf is not as important and we
can use the current defaults (1 vq and 128 cmds per vq), memory use from
preallocated cmds is cut in half. For cases where we are willing to use
more memory for higher perf, cmd memory use will now increase as the
number of queues and the queue depth increase.
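
To make the memory point concrete, a rough back-of-the-envelope
comparison of the preallocated cmd counts (the per-cmd sgl/page arrays
scale with these counts the same way):

  before: 256 cmds per session (VHOST_SCSI_DEFAULT_TAGS), regardless of
          how the device is configured
  after:  vq->num cmds per IO vq that has actually been set up, e.g.
            defaults (1 vq, 128 entries):          1 * 128  =  128 cmds (~half)
            a larger setup (8 vqs, 1024 entries):  8 * 1024 = 8192 cmds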

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 204 ++++++++++++++++++++++++++++++++-------------------
 1 file changed, 127 insertions(+), 77 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index b22adf0..13311b8 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -52,7 +52,6 @@
 #define VHOST_SCSI_VERSION  "v0.1"
 #define VHOST_SCSI_NAMELEN 256
 #define VHOST_SCSI_MAX_CDB_SIZE 32
-#define VHOST_SCSI_DEFAULT_TAGS 256
 #define VHOST_SCSI_PREALLOC_SGLS 2048
 #define VHOST_SCSI_PREALLOC_UPAGES 2048
 #define VHOST_SCSI_PREALLOC_PROT_SGLS 2048
@@ -189,6 +188,9 @@ struct vhost_scsi_virtqueue {
 	 * Writers must also take dev mutex and flush under it.
 	 */
 	int inflight_idx;
+	struct vhost_scsi_cmd *scsi_cmds;
+	struct sbitmap scsi_tags;
+	int max_cmds;
 };
 
 struct vhost_scsi {
@@ -324,7 +326,9 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
 {
 	struct vhost_scsi_cmd *tv_cmd = container_of(se_cmd,
 				struct vhost_scsi_cmd, tvc_se_cmd);
-	struct se_session *se_sess = tv_cmd->tvc_nexus->tvn_se_sess;
+	struct vhost_scsi_virtqueue *svq = container_of(tv_cmd->tvc_vq,
+				struct vhost_scsi_virtqueue, vq);
+	struct vhost_scsi_inflight *inflight = tv_cmd->inflight;
 	int i;
 
 	if (tv_cmd->tvc_sgl_count) {
@@ -336,8 +340,8 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
 			put_page(sg_page(&tv_cmd->tvc_prot_sgl[i]));
 	}
 
-	vhost_scsi_put_inflight(tv_cmd->inflight);
-	target_free_tag(se_sess, se_cmd);
+	sbitmap_clear_bit(&svq->scsi_tags, se_cmd->map_tag);
+	vhost_scsi_put_inflight(inflight);
 }
 
 static u32 vhost_scsi_sess_get_index(struct se_session *se_sess)
@@ -566,13 +570,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
 }
 
 static struct vhost_scsi_cmd *
-vhost_scsi_get_tag(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
+vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
 		   unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
 		   u32 exp_data_len, int data_direction)
 {
+	struct vhost_scsi_virtqueue *svq = container_of(vq,
+					struct vhost_scsi_virtqueue, vq);
 	struct vhost_scsi_cmd *cmd;
 	struct vhost_scsi_nexus *tv_nexus;
-	struct se_session *se_sess;
 	struct scatterlist *sg, *prot_sg;
 	struct page **pages;
 	int tag, cpu;
@@ -582,15 +587,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
 		pr_err("Unable to locate active struct vhost_scsi_nexus\n");
 		return ERR_PTR(-EIO);
 	}
-	se_sess = tv_nexus->tvn_se_sess;
 
-	tag = sbitmap_queue_get(&se_sess->sess_tag_pool, &cpu);
+	tag = sbitmap_get(&svq->scsi_tags, 0, false);
 	if (tag < 0) {
 		pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
 		return ERR_PTR(-ENOMEM);
 	}
 
-	cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[tag];
+	cmd = &svq->scsi_cmds[tag];
 	sg = cmd->tvc_sgl;
 	prot_sg = cmd->tvc_prot_sgl;
 	pages = cmd->tvc_upages;
@@ -1065,11 +1069,11 @@ static void vhost_scsi_submission_work(struct work_struct *work)
 				scsi_command_size(cdb), VHOST_SCSI_MAX_CDB_SIZE);
 				goto err;
 		}
-		cmd = vhost_scsi_get_tag(vq, tpg, cdb, tag, lun, task_attr,
+		cmd = vhost_scsi_get_cmd(vq, tpg, cdb, tag, lun, task_attr,
 					 exp_data_len + prot_bytes,
 					 data_direction);
 		if (IS_ERR(cmd)) {
-			vq_err(vq, "vhost_scsi_get_tag failed %ld\n",
+			vq_err(vq, "vhost_scsi_get_cmd failed %ld\n",
 			       PTR_ERR(cmd));
 			goto err;
 		}
@@ -1373,6 +1377,83 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 		wait_for_completion(&old_inflight[i]->comp);
 }
 
+static void vhost_scsi_destroy_vq_cmds(struct vhost_virtqueue *vq)
+{
+	struct vhost_scsi_virtqueue *svq = container_of(vq,
+					struct vhost_scsi_virtqueue, vq);
+	struct vhost_scsi_cmd *tv_cmd;
+	unsigned int i;
+
+	if (!svq->scsi_cmds)
+		return;
+
+	for (i = 0; i < svq->max_cmds; i++) {
+		tv_cmd = &svq->scsi_cmds[i];
+
+		kfree(tv_cmd->tvc_sgl);
+		kfree(tv_cmd->tvc_prot_sgl);
+		kfree(tv_cmd->tvc_upages);
+	}
+
+	sbitmap_free(&svq->scsi_tags);
+	kfree(svq->scsi_cmds);
+	svq->scsi_cmds = NULL;
+}
+
+static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
+{
+	struct vhost_scsi_virtqueue *svq = container_of(vq,
+					struct vhost_scsi_virtqueue, vq);
+	struct vhost_scsi_cmd *tv_cmd;
+	unsigned int i;
+
+	if (svq->scsi_cmds)
+		return 0;
+
+	if (sbitmap_init_node(&svq->scsi_tags, max_cmds, -1, GFP_KERNEL,
+			      NUMA_NO_NODE))
+		return -ENOMEM;
+	svq->max_cmds = max_cmds;
+
+	svq->scsi_cmds = kcalloc(max_cmds, sizeof(*tv_cmd), GFP_KERNEL);
+	if (!svq->scsi_cmds) {
+		sbitmap_free(&svq->scsi_tags);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < max_cmds; i++) {
+		tv_cmd = &svq->scsi_cmds[i];
+
+		tv_cmd->tvc_sgl = kcalloc(VHOST_SCSI_PREALLOC_SGLS,
+					  sizeof(struct scatterlist),
+					  GFP_KERNEL);
+		if (!tv_cmd->tvc_sgl) {
+			pr_err("Unable to allocate tv_cmd->tvc_sgl\n");
+			goto out;
+		}
+
+		tv_cmd->tvc_upages = kcalloc(VHOST_SCSI_PREALLOC_UPAGES,
+					     sizeof(struct page *),
+					     GFP_KERNEL);
+		if (!tv_cmd->tvc_upages) {
+			pr_err("Unable to allocate tv_cmd->tvc_upages\n");
+			goto out;
+		}
+
+		tv_cmd->tvc_prot_sgl = kcalloc(VHOST_SCSI_PREALLOC_PROT_SGLS,
+					       sizeof(struct scatterlist),
+					       GFP_KERNEL);
+		if (!tv_cmd->tvc_prot_sgl) {
+			pr_err("Unable to allocate tv_cmd->tvc_prot_sgl\n");
+			goto out;
+		}
+	}
+	return 0;
+out:
+	vhost_scsi_destroy_vq_cmds(vq);
+	return -ENOMEM;
+}
+
 /*
  * Called from vhost_scsi_ioctl() context to walk the list of available
  * vhost_scsi_tpg with an active struct vhost_scsi_nexus
@@ -1427,10 +1508,9 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 
 		if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
 			if (vs->vs_tpg && vs->vs_tpg[tpg->tport_tpgt]) {
-				kfree(vs_tpg);
 				mutex_unlock(&tpg->tv_tpg_mutex);
 				ret = -EEXIST;
-				goto out;
+				goto undepend;
 			}
 			/*
 			 * In order to ensure individual vhost-scsi configfs
@@ -1442,9 +1522,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 			ret = target_depend_item(&se_tpg->tpg_group.cg_item);
 			if (ret) {
 				pr_warn("target_depend_item() failed: %d\n", ret);
-				kfree(vs_tpg);
 				mutex_unlock(&tpg->tv_tpg_mutex);
-				goto out;
+				goto undepend;
 			}
 			tpg->tv_tpg_vhost_count++;
 			tpg->vhost_scsi = vs;
@@ -1457,6 +1536,16 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 	if (match) {
 		memcpy(vs->vs_vhost_wwpn, t->vhost_wwpn,
 		       sizeof(vs->vs_vhost_wwpn));
+
+		for (i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) {
+			vq = &vs->vqs[i].vq;
+			if (!vhost_vq_is_setup(vq))
+				continue;
+
+			if (vhost_scsi_setup_vq_cmds(vq, vq->num))
+				goto destroy_vq_cmds;
+		}
+
 		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
 			vq = &vs->vqs[i].vq;
 			mutex_lock(&vq->mutex);
@@ -1476,7 +1565,22 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 	vhost_scsi_flush(vs);
 	kfree(vs->vs_tpg);
 	vs->vs_tpg = vs_tpg;
+	goto out;
 
+destroy_vq_cmds:
+	for (i--; i >= VHOST_SCSI_VQ_IO; i--) {
+		if (!vhost_vq_get_backend(&vs->vqs[i].vq))
+			vhost_scsi_destroy_vq_cmds(&vs->vqs[i].vq);
+	}
+undepend:
+	for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
+		tpg = vs_tpg[i];
+		if (tpg) {
+			tpg->tv_tpg_vhost_count--;
+			target_undepend_item(&tpg->se_tpg.tpg_group.cg_item);
+		}
+	}
+	kfree(vs_tpg);
 out:
 	mutex_unlock(&vs->dev.mutex);
 	mutex_unlock(&vhost_scsi_mutex);
@@ -1549,6 +1653,12 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 			mutex_lock(&vq->mutex);
 			vhost_vq_set_backend(vq, NULL);
 			mutex_unlock(&vq->mutex);
+			/*
+			 * Make sure cmds are not running before tearing them
+			 * down.
+			 */
+			vhost_scsi_flush(vs);
+			vhost_scsi_destroy_vq_cmds(vq);
 		}
 	}
 	/*
@@ -1842,23 +1952,6 @@ static void vhost_scsi_port_unlink(struct se_portal_group *se_tpg,
 	mutex_unlock(&vhost_scsi_mutex);
 }
 
-static void vhost_scsi_free_cmd_map_res(struct se_session *se_sess)
-{
-	struct vhost_scsi_cmd *tv_cmd;
-	unsigned int i;
-
-	if (!se_sess->sess_cmd_map)
-		return;
-
-	for (i = 0; i < VHOST_SCSI_DEFAULT_TAGS; i++) {
-		tv_cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[i];
-
-		kfree(tv_cmd->tvc_sgl);
-		kfree(tv_cmd->tvc_prot_sgl);
-		kfree(tv_cmd->tvc_upages);
-	}
-}
-
 static ssize_t vhost_scsi_tpg_attrib_fabric_prot_type_store(
 		struct config_item *item, const char *page, size_t count)
 {
@@ -1898,45 +1991,6 @@ static ssize_t vhost_scsi_tpg_attrib_fabric_prot_type_show(
 	NULL,
 };
 
-static int vhost_scsi_nexus_cb(struct se_portal_group *se_tpg,
-			       struct se_session *se_sess, void *p)
-{
-	struct vhost_scsi_cmd *tv_cmd;
-	unsigned int i;
-
-	for (i = 0; i < VHOST_SCSI_DEFAULT_TAGS; i++) {
-		tv_cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[i];
-
-		tv_cmd->tvc_sgl = kcalloc(VHOST_SCSI_PREALLOC_SGLS,
-					  sizeof(struct scatterlist),
-					  GFP_KERNEL);
-		if (!tv_cmd->tvc_sgl) {
-			pr_err("Unable to allocate tv_cmd->tvc_sgl\n");
-			goto out;
-		}
-
-		tv_cmd->tvc_upages = kcalloc(VHOST_SCSI_PREALLOC_UPAGES,
-					     sizeof(struct page *),
-					     GFP_KERNEL);
-		if (!tv_cmd->tvc_upages) {
-			pr_err("Unable to allocate tv_cmd->tvc_upages\n");
-			goto out;
-		}
-
-		tv_cmd->tvc_prot_sgl = kcalloc(VHOST_SCSI_PREALLOC_PROT_SGLS,
-					       sizeof(struct scatterlist),
-					       GFP_KERNEL);
-		if (!tv_cmd->tvc_prot_sgl) {
-			pr_err("Unable to allocate tv_cmd->tvc_prot_sgl\n");
-			goto out;
-		}
-	}
-	return 0;
-out:
-	vhost_scsi_free_cmd_map_res(se_sess);
-	return -ENOMEM;
-}
-
 static int vhost_scsi_make_nexus(struct vhost_scsi_tpg *tpg,
 				const char *name)
 {
@@ -1960,12 +2014,9 @@ static int vhost_scsi_make_nexus(struct vhost_scsi_tpg *tpg,
 	 * struct se_node_acl for the vhost_scsi struct se_portal_group with
 	 * the SCSI Initiator port name of the passed configfs group 'name'.
 	 */
-	tv_nexus->tvn_se_sess = target_setup_session(&tpg->se_tpg,
-					VHOST_SCSI_DEFAULT_TAGS,
-					sizeof(struct vhost_scsi_cmd),
+	tv_nexus->tvn_se_sess = target_setup_session(&tpg->se_tpg, 0, 0,
 					TARGET_PROT_DIN_PASS | TARGET_PROT_DOUT_PASS,
-					(unsigned char *)name, tv_nexus,
-					vhost_scsi_nexus_cb);
+					(unsigned char *)name, tv_nexus, NULL);
 	if (IS_ERR(tv_nexus->tvn_se_sess)) {
 		mutex_unlock(&tpg->tv_tpg_mutex);
 		kfree(tv_nexus);
@@ -2015,7 +2066,6 @@ static int vhost_scsi_drop_nexus(struct vhost_scsi_tpg *tpg)
 		" %s Initiator Port: %s\n", vhost_scsi_dump_proto_id(tpg->tport),
 		tv_nexus->tvn_se_sess->se_node_acl->initiatorname);
 
-	vhost_scsi_free_cmd_map_res(se_sess);
 	/*
 	 * Release the SCSI I_T Nexus to the emulated vhost Target Port
 	 */
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 4/8] vhost scsi: fix cmd completion race
  2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
                   ` (2 preceding siblings ...)
  2020-09-21 18:23 ` [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session Mike Christie
@ 2020-09-21 18:23 ` Mike Christie
  2020-09-22  2:48   ` Bart Van Assche
  2020-09-21 18:23 ` [PATCH 5/8] vhost scsi: add lun parser helper Mike Christie
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

We might not do the final se_cmd put from vhost_scsi_complete_cmd_work.
If the last put happens a little later, then we could race:
vhost_scsi_complete_cmd_work does vhost_signal, the guest runs and sends
more IO, and vhost_scsi_handle_vq runs but does not find any free cmds.

This patch has us delay completing the cmd until the last LIO core ref
is dropped. We then know that, once we signal to the guest that the cmd
is completed, if it queues a new command it will find a free cmd.
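
In other words, the ordering changes roughly like this (a simplified
view of the diff below):

  before: the completion work copies the response and signals the guest
          while the final se_cmd put (and the tag free in ->release_cmd)
          may still be pending, so a fast guest can resubmit before any
          cmd has been freed.
  after:  the final se_cmd reference drop (->release_cmd) is what queues
          the completion work; that work frees the cmd's tag and resources
          and only then signals the guest, so a free cmd is guaranteed by
          the time the guest sees the completion.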

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 42 +++++++++++++++---------------------------
 1 file changed, 15 insertions(+), 27 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 13311b8..26d0f75 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -322,7 +322,7 @@ static u32 vhost_scsi_tpg_get_inst_index(struct se_portal_group *se_tpg)
 	return 1;
 }
 
-static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
+static void vhost_scsi_release_cmd_res(struct se_cmd *se_cmd)
 {
 	struct vhost_scsi_cmd *tv_cmd = container_of(se_cmd,
 				struct vhost_scsi_cmd, tvc_se_cmd);
@@ -344,6 +344,16 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
 	vhost_scsi_put_inflight(inflight);
 }
 
+static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
+{
+	struct vhost_scsi_cmd *cmd = container_of(se_cmd,
+					struct vhost_scsi_cmd, tvc_se_cmd);
+	struct vhost_scsi *vs = cmd->tvc_vhost;
+
+	llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list);
+	vhost_work_queue(&vs->dev, &vs->vs_completion_work);
+}
+
 static u32 vhost_scsi_sess_get_index(struct se_session *se_sess)
 {
 	return 0;
@@ -366,28 +376,15 @@ static int vhost_scsi_get_cmd_state(struct se_cmd *se_cmd)
 	return 0;
 }
 
-static void vhost_scsi_complete_cmd(struct vhost_scsi_cmd *cmd)
-{
-	struct vhost_scsi *vs = cmd->tvc_vhost;
-
-	llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list);
-
-	vhost_work_queue(&vs->dev, &vs->vs_completion_work);
-}
-
 static int vhost_scsi_queue_data_in(struct se_cmd *se_cmd)
 {
-	struct vhost_scsi_cmd *cmd = container_of(se_cmd,
-				struct vhost_scsi_cmd, tvc_se_cmd);
-	vhost_scsi_complete_cmd(cmd);
+	transport_generic_free_cmd(se_cmd, 0);
 	return 0;
 }
 
 static int vhost_scsi_queue_status(struct se_cmd *se_cmd)
 {
-	struct vhost_scsi_cmd *cmd = container_of(se_cmd,
-				struct vhost_scsi_cmd, tvc_se_cmd);
-	vhost_scsi_complete_cmd(cmd);
+	transport_generic_free_cmd(se_cmd, 0);
 	return 0;
 }
 
@@ -433,15 +430,6 @@ static void vhost_scsi_free_evt(struct vhost_scsi *vs, struct vhost_scsi_evt *ev
 	return evt;
 }
 
-static void vhost_scsi_free_cmd(struct vhost_scsi_cmd *cmd)
-{
-	struct se_cmd *se_cmd = &cmd->tvc_se_cmd;
-
-	/* TODO locking against target/backend threads? */
-	transport_generic_free_cmd(se_cmd, 0);
-
-}
-
 static int vhost_scsi_check_stop_free(struct se_cmd *se_cmd)
 {
 	return target_put_sess_cmd(se_cmd);
@@ -560,7 +548,7 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
 		} else
 			pr_err("Faulted on virtio_scsi_cmd_resp\n");
 
-		vhost_scsi_free_cmd(cmd);
+		vhost_scsi_release_cmd_res(se_cmd);
 	}
 
 	vq = -1;
@@ -1092,7 +1080,7 @@ static void vhost_scsi_submission_work(struct work_struct *work)
 						      &prot_iter, exp_data_len,
 						      &data_iter))) {
 				vq_err(vq, "Failed to map iov to sgl\n");
-				vhost_scsi_release_cmd(&cmd->tvc_se_cmd);
+				vhost_scsi_release_cmd_res(&cmd->tvc_se_cmd);
 				goto err;
 			}
 		}
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 5/8] vhost scsi: add lun parser helper
  2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
                   ` (3 preceding siblings ...)
  2020-09-21 18:23 ` [PATCH 4/8] vhost scsi: fix cmd completion race Mike Christie
@ 2020-09-21 18:23 ` Mike Christie
  2020-09-23 10:23   ` Paolo Bonzini
  2020-09-21 18:23 ` [PATCH 6/8] vhost scsi: Add support for LUN resets Mike Christie
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

Move the code that parses the lun from the req's lun_buf into a helper,
so the TMF code can use it in the next patch.
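
As a reference for the bit math: the virtio-scsi guest driver encodes a
single-level LUN as { 1, target, 0x40 | (lun >> 8), lun & 0xff }, so the
helper recovers the 14-bit LUN from bytes 2 and 3 and masks off the 0x40
addressing flag. A quick standalone check of the arithmetic (userspace
sketch, not driver code):

#include <stdint.h>
#include <stdio.h>

/* same math as the new vhost_buf_to_lun() helper */
static uint16_t vhost_buf_to_lun(uint8_t *lun_buf)
{
	return ((lun_buf[2] << 8) | lun_buf[3]) & 0x3FFF;
}

int main(void)
{
	/* LUN 5 on target 1, encoded the way the guest builds req->lun */
	uint8_t lun_buf[8] = { 1, 1, 0x40 | (5 >> 8), 5 & 0xff };

	printf("lun = %u\n", vhost_buf_to_lun(lun_buf));	/* prints lun = 5 */
	return 0;
}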

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 26d0f75..736ce19 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -899,6 +899,11 @@ static void vhost_scsi_submission_work(struct work_struct *work)
 	return ret;
 }
 
+static u16 vhost_buf_to_lun(u8 *lun_buf)
+{
+	return ((lun_buf[2] << 8) | lun_buf[3]) & 0x3FFF;
+}
+
 static void
 vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 {
@@ -1037,12 +1042,12 @@ static void vhost_scsi_submission_work(struct work_struct *work)
 			tag = vhost64_to_cpu(vq, v_req_pi.tag);
 			task_attr = v_req_pi.task_attr;
 			cdb = &v_req_pi.cdb[0];
-			lun = ((v_req_pi.lun[2] << 8) | v_req_pi.lun[3]) & 0x3FFF;
+			lun = vhost_buf_to_lun(v_req_pi.lun);
 		} else {
 			tag = vhost64_to_cpu(vq, v_req.tag);
 			task_attr = v_req.task_attr;
 			cdb = &v_req.cdb[0];
-			lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
+			lun = vhost_buf_to_lun(v_req.lun);
 		}
 		/*
 		 * Check that the received CDB size does not exceeded our
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 6/8] vhost scsi: Add support for LUN resets.
  2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
                   ` (4 preceding siblings ...)
  2020-09-21 18:23 ` [PATCH 5/8] vhost scsi: add lun parser helper Mike Christie
@ 2020-09-21 18:23 ` Mike Christie
  2020-09-21 18:23 ` [PATCH 7/8] vhost: remove work arg from vhost_work_flush Mike Christie
  2020-09-21 18:23 ` [PATCH 8/8] vhost scsi: remove extra flushes Mike Christie
  7 siblings, 0 replies; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

In newer versions of virtio-scsi we just reset the timer when a command
times out, so TMFs are never sent for the cmd timeout case. However, in
older kernels and for the TMF inject cases, we can still get resets, and
we end up just failing immediately, so the guest might see the device get
offlined and IO errors.

For the older kernel cases, we want the same end result as the modern
virtio-scsi driver, where we let the lower levels fire their error
handling and handle the problem, and at the upper levels we wait. This
patch ties the LUN RESET handling into the LIO TMF code, which will just
wait for outstanding commands to complete, like we do in the modern
virtio-scsi case.

Note: I did not handle the ABORT case to keep this simple. For ABORTs,
LIO just waits on the cmd like it does for the RESET case. If an ABORT
fails, the guest OS ends up escalating to LUN RESET, so in the end we get
the same behavior where we wait on the outstanding cmds.
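
At a high level the new path is (based on the code below):

  1. the guest sends a VIRTIO_SCSI_T_TMF request with the
     LOGICAL_UNIT_RESET subtype on the control vq
  2. vhost_scsi_handle_tmf() takes the tpg's reserved vhost_scsi_tmf,
     records the response iovec/descriptor, and submits a TMR_LUN_RESET
     via target_submit_tmr()
  3. LIO runs the reset and waits for the outstanding commands
  4. the queue_tm_rsp/release_cmd callbacks queue vhost_scsi_tmf_resp_work,
     which sends VIRTIO_SCSI_S_FUNCTION_SUCCEEDED or _REJECTED to the
     guest and returns the tmf to the tpg's queue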

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 147 ++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 134 insertions(+), 13 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 736ce19..8791db8 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -139,6 +139,7 @@ struct vhost_scsi_tpg {
 	struct se_portal_group se_tpg;
 	/* Pointer back to vhost_scsi, protected by tv_tpg_mutex */
 	struct vhost_scsi *vhost_scsi;
+	struct list_head tmf_queue;
 };
 
 struct vhost_scsi_tport {
@@ -211,6 +212,20 @@ struct vhost_scsi {
 	int vs_events_nr; /* num of pending events, protected by vq->mutex */
 };
 
+struct vhost_scsi_tmf {
+	struct vhost_work vwork;
+	struct vhost_scsi_tpg *tpg;
+	struct vhost_scsi *vhost;
+	struct vhost_scsi_virtqueue *svq;
+	struct list_head queue_entry;
+
+	struct se_cmd se_cmd;
+	struct vhost_scsi_inflight *inflight;
+	struct iovec resp_iov;
+	int in_iovs;
+	int vq_desc;
+};
+
 /*
  * Context for processing request and control queue operations.
  */
@@ -344,14 +359,32 @@ static void vhost_scsi_release_cmd_res(struct se_cmd *se_cmd)
 	vhost_scsi_put_inflight(inflight);
 }
 
+static void vhost_scsi_release_tmf_res(struct vhost_scsi_tmf *tmf)
+{
+	struct vhost_scsi_tpg *tpg = tmf->tpg;
+	struct vhost_scsi_inflight *inflight = tmf->inflight;
+
+	mutex_lock(&tpg->tv_tpg_mutex);
+	list_add_tail(&tmf->queue_entry, &tpg->tmf_queue);
+	mutex_unlock(&tpg->tv_tpg_mutex);
+	vhost_scsi_put_inflight(inflight);
+}
+
 static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
 {
-	struct vhost_scsi_cmd *cmd = container_of(se_cmd,
+	if (se_cmd->se_cmd_flags & SCF_SCSI_TMR_CDB) {
+		struct vhost_scsi_tmf *tmf = container_of(se_cmd,
+					struct vhost_scsi_tmf, se_cmd);
+
+		vhost_work_queue(&tmf->vhost->dev, &tmf->vwork);
+	} else {
+		struct vhost_scsi_cmd *cmd = container_of(se_cmd,
 					struct vhost_scsi_cmd, tvc_se_cmd);
-	struct vhost_scsi *vs = cmd->tvc_vhost;
+		struct vhost_scsi *vs = cmd->tvc_vhost;
 
-	llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list);
-	vhost_work_queue(&vs->dev, &vs->vs_completion_work);
+		llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list);
+		vhost_work_queue(&vs->dev, &vs->vs_completion_work);
+	}
 }
 
 static u32 vhost_scsi_sess_get_index(struct se_session *se_sess)
@@ -390,7 +423,10 @@ static int vhost_scsi_queue_status(struct se_cmd *se_cmd)
 
 static void vhost_scsi_queue_tm_rsp(struct se_cmd *se_cmd)
 {
-	return;
+	struct vhost_scsi_tmf *tmf = container_of(se_cmd, struct vhost_scsi_tmf,
+						  se_cmd);
+
+	transport_generic_free_cmd(&tmf->se_cmd, 0);
 }
 
 static void vhost_scsi_aborted_task(struct se_cmd *se_cmd)
@@ -1121,9 +1157,9 @@ static u16 vhost_buf_to_lun(u8 *lun_buf)
 }
 
 static void
-vhost_scsi_send_tmf_reject(struct vhost_scsi *vs,
-			   struct vhost_virtqueue *vq,
-			   struct vhost_scsi_ctx *vc)
+vhost_scsi_send_tmf_resp(struct vhost_scsi *vs, struct vhost_virtqueue *vq,
+			 int in_iovs, int vq_desc, struct iovec *resp_iov,
+			 int tmf_resp_code)
 {
 	struct virtio_scsi_ctrl_tmf_resp rsp;
 	struct iov_iter iov_iter;
@@ -1131,17 +1167,87 @@ static u16 vhost_buf_to_lun(u8 *lun_buf)
 
 	pr_debug("%s\n", __func__);
 	memset(&rsp, 0, sizeof(rsp));
-	rsp.response = VIRTIO_SCSI_S_FUNCTION_REJECTED;
+	rsp.response = tmf_resp_code;
 
-	iov_iter_init(&iov_iter, READ, &vq->iov[vc->out], vc->in, sizeof(rsp));
+	iov_iter_init(&iov_iter, READ, resp_iov, in_iovs, sizeof(rsp));
 
 	ret = copy_to_iter(&rsp, sizeof(rsp), &iov_iter);
 	if (likely(ret == sizeof(rsp)))
-		vhost_add_used_and_signal(&vs->dev, vq, vc->head, 0);
+		vhost_add_used_and_signal(&vs->dev, vq, vq_desc, 0);
 	else
 		pr_err("Faulted on virtio_scsi_ctrl_tmf_resp\n");
 }
 
+static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
+{
+	struct vhost_scsi_tmf *tmf = container_of(work, struct vhost_scsi_tmf,
+						  vwork);
+	int resp_code;
+
+	if (tmf->se_cmd.se_tmr_req->response == TMR_FUNCTION_COMPLETE)
+		resp_code = VIRTIO_SCSI_S_FUNCTION_SUCCEEDED;
+	else
+		resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED;
+
+	vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs,
+				 tmf->vq_desc, &tmf->resp_iov, resp_code);
+	vhost_scsi_release_tmf_res(tmf);
+}
+
+static void
+vhost_scsi_handle_tmf(struct vhost_scsi *vs, struct vhost_scsi_tpg *tpg,
+		      struct vhost_virtqueue *vq,
+		      struct virtio_scsi_ctrl_tmf_req *vtmf,
+		      struct vhost_scsi_ctx *vc)
+{
+	struct vhost_scsi_virtqueue *svq = container_of(vq,
+					struct vhost_scsi_virtqueue, vq);
+	struct vhost_scsi_tmf *tmf;
+
+	if (vhost32_to_cpu(vq, vtmf->subtype) !=
+	    VIRTIO_SCSI_T_TMF_LOGICAL_UNIT_RESET)
+		goto send_reject;
+
+	if (!tpg->tpg_nexus || !tpg->tpg_nexus->tvn_se_sess) {
+		pr_err("Unable to locate active struct vhost_scsi_nexus for LUN RESET.\n");
+		goto send_reject;
+        }
+
+	mutex_lock(&tpg->tv_tpg_mutex);
+	if (list_empty(&tpg->tmf_queue)) {
+		pr_err("Missing reserve TMF. Could not handle LUN RESET.\n");
+		mutex_unlock(&tpg->tv_tpg_mutex);
+		goto send_reject;
+	}
+
+	tmf = list_first_entry(&tpg->tmf_queue, struct vhost_scsi_tmf,
+			       queue_entry);
+	list_del_init(&tmf->queue_entry);
+	mutex_unlock(&tpg->tv_tpg_mutex);
+
+	tmf->tpg = tpg;
+	tmf->vhost = vs;
+	tmf->svq = svq;
+	tmf->resp_iov = vq->iov[vc->out];
+	tmf->vq_desc = vc->head;
+	tmf->in_iovs = vc->in;
+	tmf->inflight = vhost_scsi_get_inflight(vq);
+
+	if (target_submit_tmr(&tmf->se_cmd, tpg->tpg_nexus->tvn_se_sess, NULL,
+			      vhost_buf_to_lun(vtmf->lun), NULL,
+			      TMR_LUN_RESET, GFP_KERNEL, 0,
+			      TARGET_SCF_ACK_KREF) < 0) {
+		vhost_scsi_release_tmf_res(tmf);
+		goto send_reject;
+	}
+
+	return;
+
+send_reject:
+	vhost_scsi_send_tmf_resp(vs, vq, vc->in, vc->head, &vq->iov[vc->out],
+				 VIRTIO_SCSI_S_FUNCTION_REJECTED);
+}
+
 static void
 vhost_scsi_send_an_resp(struct vhost_scsi *vs,
 			struct vhost_virtqueue *vq,
@@ -1167,6 +1273,7 @@ static u16 vhost_buf_to_lun(u8 *lun_buf)
 static void
 vhost_scsi_ctl_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 {
+	struct vhost_scsi_tpg *tpg;
 	union {
 		__virtio32 type;
 		struct virtio_scsi_ctrl_an_req an;
@@ -1248,12 +1355,12 @@ static u16 vhost_buf_to_lun(u8 *lun_buf)
 		vc.req += typ_size;
 		vc.req_size -= typ_size;
 
-		ret = vhost_scsi_get_req(vq, &vc, NULL);
+		ret = vhost_scsi_get_req(vq, &vc, &tpg);
 		if (ret)
 			goto err;
 
 		if (v_req.type == VIRTIO_SCSI_T_TMF)
-			vhost_scsi_send_tmf_reject(vs, vq, &vc);
+			vhost_scsi_handle_tmf(vs, tpg, vq, &v_req.tmf, &vc);
 		else
 			vhost_scsi_send_an_resp(vs, vq, &vc);
 err:
@@ -1914,11 +2021,19 @@ static int vhost_scsi_port_link(struct se_portal_group *se_tpg,
 {
 	struct vhost_scsi_tpg *tpg = container_of(se_tpg,
 				struct vhost_scsi_tpg, se_tpg);
+	struct vhost_scsi_tmf *tmf;
+
+	tmf = kzalloc(sizeof(*tmf), GFP_KERNEL);
+	if (!tmf)
+		return -ENOMEM;
+	INIT_LIST_HEAD(&tmf->queue_entry);
+	vhost_work_init(&tmf->vwork, vhost_scsi_tmf_resp_work);
 
 	mutex_lock(&vhost_scsi_mutex);
 
 	mutex_lock(&tpg->tv_tpg_mutex);
 	tpg->tv_tpg_port_count++;
+	list_add_tail(&tmf->queue_entry, &tpg->tmf_queue);
 	mutex_unlock(&tpg->tv_tpg_mutex);
 
 	vhost_scsi_hotplug(tpg, lun);
@@ -1933,11 +2048,16 @@ static void vhost_scsi_port_unlink(struct se_portal_group *se_tpg,
 {
 	struct vhost_scsi_tpg *tpg = container_of(se_tpg,
 				struct vhost_scsi_tpg, se_tpg);
+	struct vhost_scsi_tmf *tmf;
 
 	mutex_lock(&vhost_scsi_mutex);
 
 	mutex_lock(&tpg->tv_tpg_mutex);
 	tpg->tv_tpg_port_count--;
+	tmf = list_first_entry(&tpg->tmf_queue, struct vhost_scsi_tmf,
+			       queue_entry);
+	list_del(&tmf->queue_entry);
+	kfree(tmf);
 	mutex_unlock(&tpg->tv_tpg_mutex);
 
 	vhost_scsi_hotunplug(tpg, lun);
@@ -2198,6 +2318,7 @@ static ssize_t vhost_scsi_tpg_nexus_store(struct config_item *item,
 	}
 	mutex_init(&tpg->tv_tpg_mutex);
 	INIT_LIST_HEAD(&tpg->tv_tpg_list);
+	INIT_LIST_HEAD(&tpg->tmf_queue);
 	tpg->tport = tport;
 	tpg->tport_tpgt = tpgt;
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 7/8] vhost: remove work arg from vhost_work_flush
  2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
                   ` (5 preceding siblings ...)
  2020-09-21 18:23 ` [PATCH 6/8] vhost scsi: Add support for LUN resets Mike Christie
@ 2020-09-21 18:23 ` Mike Christie
  2020-09-22  2:01   ` Jason Wang
  2020-09-21 18:23 ` [PATCH 8/8] vhost scsi: remove extra flushes Mike Christie
  7 siblings, 1 reply; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

vhost_work_flush doesn't do anything with the work arg. This patch drops
it and then renames vhost_work_flush to vhost_work_dev_flush to reflect
that the function flushes all the work in the dev and not just a
specific queue or work item.
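
For reference, the flush works by queueing a marker work item and waiting
for it; since the single vhost worker runs queued items in order, waiting
for the marker means everything queued before it (poll work, the
completion work, the event work) has already run. Roughly, the renamed
function looks like this (from vhost.c):

void vhost_work_dev_flush(struct vhost_dev *dev)
{
	struct vhost_flush_struct flush;

	if (dev->worker) {
		init_completion(&flush.wait_event);
		vhost_work_init(&flush.work, vhost_flush_work);

		vhost_work_queue(dev, &flush.work);
		wait_for_completion(&flush.wait_event);
	}
}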

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c  | 4 ++--
 drivers/vhost/vhost.c | 8 ++++----
 drivers/vhost/vhost.h | 2 +-
 drivers/vhost/vsock.c | 2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 8791db8..5833059 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1469,8 +1469,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 	/* Flush both the vhost poll and vhost work */
 	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
 		vhost_scsi_flush_vq(vs, i);
-	vhost_work_flush(&vs->dev, &vs->vs_completion_work);
-	vhost_work_flush(&vs->dev, &vs->vs_event_work);
+	vhost_work_dev_flush(&vs->dev);
+	vhost_work_dev_flush(&vs->dev);
 
 	/* Wait for all reqs issued before the flush to be finished */
 	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 5dd9eb1..f83674e 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -231,7 +231,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
 }
 EXPORT_SYMBOL_GPL(vhost_poll_stop);
 
-void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
+void vhost_work_dev_flush(struct vhost_dev *dev)
 {
 	struct vhost_flush_struct flush;
 
@@ -243,13 +243,13 @@ void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
 		wait_for_completion(&flush.wait_event);
 	}
 }
-EXPORT_SYMBOL_GPL(vhost_work_flush);
+EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
 
 /* Flush any work that has been scheduled. When calling this, don't hold any
  * locks that are also used by the callback. */
 void vhost_poll_flush(struct vhost_poll *poll)
 {
-	vhost_work_flush(poll->dev, &poll->work);
+	vhost_work_dev_flush(poll->dev);
 }
 EXPORT_SYMBOL_GPL(vhost_poll_flush);
 
@@ -542,7 +542,7 @@ static int vhost_attach_cgroups(struct vhost_dev *dev)
 	attach.owner = current;
 	vhost_work_init(&attach.work, vhost_attach_cgroups_work);
 	vhost_work_queue(dev, &attach.work);
-	vhost_work_flush(dev, &attach.work);
+	vhost_work_dev_flush(dev);
 	return attach.ret;
 }
 
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 3d30b3d..b91efb5 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -46,7 +46,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
 void vhost_poll_stop(struct vhost_poll *poll);
 void vhost_poll_flush(struct vhost_poll *poll);
 void vhost_poll_queue(struct vhost_poll *poll);
-void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
+void vhost_work_dev_flush(struct vhost_dev *dev);
 long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp);
 
 struct vhost_log {
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index a483cec..f40205f 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -652,7 +652,7 @@ static void vhost_vsock_flush(struct vhost_vsock *vsock)
 	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
 		if (vsock->vqs[i].handle_kick)
 			vhost_poll_flush(&vsock->vqs[i].poll);
-	vhost_work_flush(&vsock->dev, &vsock->send_pkt_work);
+	vhost_work_dev_flush(&vsock->dev);
 }
 
 static void vhost_vsock_reset_orphans(struct sock *sk)
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 8/8] vhost scsi: remove extra flushes
  2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
                   ` (6 preceding siblings ...)
  2020-09-21 18:23 ` [PATCH 7/8] vhost: remove work arg from vhost_work_flush Mike Christie
@ 2020-09-21 18:23 ` Mike Christie
  7 siblings, 0 replies; 23+ messages in thread
From: Mike Christie @ 2020-09-21 18:23 UTC (permalink / raw)
  To: martin.petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization

The vhost work flush function was flushing the entire work queue, so
there is no need for the double vhost_work_dev_flush calls in
vhost_scsi_flush.

And we do not need to call vhost_poll_flush for each poller because
that call also ends up flushing the same work queue thread the
vhost_work_dev_flush call flushed.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 5833059..cdc8364 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1444,11 +1444,6 @@ static void vhost_scsi_handle_kick(struct vhost_work *work)
 	vhost_scsi_handle_vq(vs, vq);
 }
 
-static void vhost_scsi_flush_vq(struct vhost_scsi *vs, int index)
-{
-	vhost_poll_flush(&vs->vqs[index].vq.poll);
-}
-
 /* Callers must hold dev mutex */
 static void vhost_scsi_flush(struct vhost_scsi *vs)
 {
@@ -1467,9 +1462,6 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 		kref_put(&old_inflight[i]->kref, vhost_scsi_done_inflight);
 
 	/* Flush both the vhost poll and vhost work */
-	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
-		vhost_scsi_flush_vq(vs, i);
-	vhost_work_dev_flush(&vs->dev);
 	vhost_work_dev_flush(&vs->dev);
 
 	/* Wait for all reqs issued before the flush to be finished */
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
  2020-09-21 18:23 ` [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session Mike Christie
@ 2020-09-22  1:32   ` kernel test robot
  2020-09-23  7:53   ` Dan Carpenter
  2020-09-24  6:22   ` Michael S. Tsirkin
  2 siblings, 0 replies; 23+ messages in thread
From: kernel test robot @ 2020-09-22  1:32 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	jasowang, pbonzini, stefanha, virtualization
  Cc: kbuild-all, clang-built-linux

[-- Attachment #1: Type: text/plain, Size: 13984 bytes --]

Hi Mike,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on vhost/linux-next]
[also build test WARNING on v5.9-rc6 next-20200921]
[cannot apply to target/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Mike-Christie/vhost-scsi-fixes-and-cleanups/20200922-031251
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
config: s390-randconfig-r022-20200920 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 4e8c028158b56d9c2142a62464e8e0686bde3584)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install s390 cross compiling tool for clang build
        # apt-get install binutils-s390x-linux-gnu
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=s390 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   include/asm-generic/io.h:490:45: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le32_to_cpu(__raw_readl(PCI_IOBASE + addr));
                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
   #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
                                                             ^
   include/uapi/linux/swab.h:119:21: note: expanded from macro '__swab32'
           ___constant_swab32(x) :                 \
                              ^
   include/uapi/linux/swab.h:20:12: note: expanded from macro '___constant_swab32'
           (((__u32)(x) & (__u32)0x0000ff00UL) <<  8) |            \
                     ^
   In file included from drivers/vhost/scsi.c:45:
   In file included from include/uapi/linux/vhost.h:14:
   In file included from include/uapi/linux/vhost_types.h:16:
   In file included from include/linux/virtio_config.h:7:
   In file included from include/linux/virtio.h:7:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:72:
   include/asm-generic/io.h:490:45: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le32_to_cpu(__raw_readl(PCI_IOBASE + addr));
                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
   #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
                                                             ^
   include/uapi/linux/swab.h:119:21: note: expanded from macro '__swab32'
           ___constant_swab32(x) :                 \
                              ^
   include/uapi/linux/swab.h:21:12: note: expanded from macro '___constant_swab32'
           (((__u32)(x) & (__u32)0x00ff0000UL) >>  8) |            \
                     ^
   In file included from drivers/vhost/scsi.c:45:
   In file included from include/uapi/linux/vhost.h:14:
   In file included from include/uapi/linux/vhost_types.h:16:
   In file included from include/linux/virtio_config.h:7:
   In file included from include/linux/virtio.h:7:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:72:
   include/asm-generic/io.h:490:45: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le32_to_cpu(__raw_readl(PCI_IOBASE + addr));
                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
   #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
                                                             ^
   include/uapi/linux/swab.h:119:21: note: expanded from macro '__swab32'
           ___constant_swab32(x) :                 \
                              ^
   include/uapi/linux/swab.h:22:12: note: expanded from macro '___constant_swab32'
           (((__u32)(x) & (__u32)0xff000000UL) >> 24)))
                     ^
   In file included from drivers/vhost/scsi.c:45:
   In file included from include/uapi/linux/vhost.h:14:
   In file included from include/uapi/linux/vhost_types.h:16:
   In file included from include/linux/virtio_config.h:7:
   In file included from include/linux/virtio.h:7:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:72:
   include/asm-generic/io.h:490:45: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le32_to_cpu(__raw_readl(PCI_IOBASE + addr));
                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
   #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
                                                             ^
   include/uapi/linux/swab.h:120:12: note: expanded from macro '__swab32'
           __fswab32(x))
                     ^
   In file included from drivers/vhost/scsi.c:45:
   In file included from include/uapi/linux/vhost.h:14:
   In file included from include/uapi/linux/vhost_types.h:16:
   In file included from include/linux/virtio_config.h:7:
   In file included from include/linux/virtio.h:7:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:72:
   include/asm-generic/io.h:501:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writeb(value, PCI_IOBASE + addr);
                               ~~~~~~~~~~ ^
   include/asm-generic/io.h:511:46: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writew(cpu_to_le16(value), PCI_IOBASE + addr);
                                            ~~~~~~~~~~ ^
   include/asm-generic/io.h:521:46: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writel(cpu_to_le32(value), PCI_IOBASE + addr);
                                            ~~~~~~~~~~ ^
   include/asm-generic/io.h:609:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsb(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:617:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsw(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:625:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsl(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:634:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesb(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   include/asm-generic/io.h:643:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesw(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   include/asm-generic/io.h:652:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesl(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
>> drivers/vhost/scsi.c:606:28: warning: variable 'cpu' is uninitialized when used here [-Wuninitialized]
           cmd->tvc_se_cmd.map_cpu = cpu;
                                     ^~~
   drivers/vhost/scsi.c:583:14: note: initialize the variable 'cpu' to silence this warning
           int tag, cpu;
                       ^
                        = 0
   21 warnings generated.

# https://github.com/0day-ci/linux/commit/aef0e1e9298ab68f2d7bdf1afb9a376641b993d5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Mike-Christie/vhost-scsi-fixes-and-cleanups/20200922-031251
git checkout aef0e1e9298ab68f2d7bdf1afb9a376641b993d5
vim +/cpu +606 drivers/vhost/scsi.c

057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  571  
1a1ff8256af679 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  572  static struct vhost_scsi_cmd *
aef0e1e9298ab6 drivers/vhost/scsi.c      Mike Christie      2020-09-21  573  vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
95e7c4341b8e28 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  574  		   unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
95e7c4341b8e28 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  575  		   u32 exp_data_len, int data_direction)
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  576  {
aef0e1e9298ab6 drivers/vhost/scsi.c      Mike Christie      2020-09-21  577  	struct vhost_scsi_virtqueue *svq = container_of(vq,
aef0e1e9298ab6 drivers/vhost/scsi.c      Mike Christie      2020-09-21  578  					struct vhost_scsi_virtqueue, vq);
1a1ff8256af679 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  579  	struct vhost_scsi_cmd *cmd;
1a1ff8256af679 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  580  	struct vhost_scsi_nexus *tv_nexus;
b1935f687bb93b drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  581  	struct scatterlist *sg, *prot_sg;
3aee26b4ae9104 drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  582  	struct page **pages;
10e9cbb6b53111 drivers/vhost/scsi.c      Matthew Wilcox     2018-06-12  583  	int tag, cpu;
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  584  
9871831283e795 drivers/vhost/scsi.c      Asias He           2013-05-06  585  	tv_nexus = tpg->tpg_nexus;
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  586  	if (!tv_nexus) {
1a1ff8256af679 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  587  		pr_err("Unable to locate active struct vhost_scsi_nexus\n");
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  588  		return ERR_PTR(-EIO);
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  589  	}
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  590  
aef0e1e9298ab6 drivers/vhost/scsi.c      Mike Christie      2020-09-21  591  	tag = sbitmap_get(&svq->scsi_tags, 0, false);
4a47d3a1ff10e5 drivers/vhost/scsi.c      Nicholas Bellinger 2013-09-23  592  	if (tag < 0) {
1a1ff8256af679 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  593  		pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
4a47d3a1ff10e5 drivers/vhost/scsi.c      Nicholas Bellinger 2013-09-23  594  		return ERR_PTR(-ENOMEM);
4a47d3a1ff10e5 drivers/vhost/scsi.c      Nicholas Bellinger 2013-09-23  595  	}
4a47d3a1ff10e5 drivers/vhost/scsi.c      Nicholas Bellinger 2013-09-23  596  
aef0e1e9298ab6 drivers/vhost/scsi.c      Mike Christie      2020-09-21  597  	cmd = &svq->scsi_cmds[tag];
3aee26b4ae9104 drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  598  	sg = cmd->tvc_sgl;
b1935f687bb93b drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  599  	prot_sg = cmd->tvc_prot_sgl;
3aee26b4ae9104 drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  600  	pages = cmd->tvc_upages;
473f0b15a4c97d drivers/vhost/scsi.c      Markus Elfring     2017-05-20  601  	memset(cmd, 0, sizeof(*cmd));
3aee26b4ae9104 drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  602  	cmd->tvc_sgl = sg;
b1935f687bb93b drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  603  	cmd->tvc_prot_sgl = prot_sg;
3aee26b4ae9104 drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  604  	cmd->tvc_upages = pages;
4824d3bfb9097a drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-07  605  	cmd->tvc_se_cmd.map_tag = tag;
10e9cbb6b53111 drivers/vhost/scsi.c      Matthew Wilcox     2018-06-12 @606  	cmd->tvc_se_cmd.map_cpu = cpu;
95e7c4341b8e28 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  607  	cmd->tvc_tag = scsi_tag;
95e7c4341b8e28 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  608  	cmd->tvc_lun = lun;
95e7c4341b8e28 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  609  	cmd->tvc_task_attr = task_attr;
3c63f66a0dcdd6 drivers/vhost/scsi.c      Asias He           2013-05-06  610  	cmd->tvc_exp_data_len = exp_data_len;
3c63f66a0dcdd6 drivers/vhost/scsi.c      Asias He           2013-05-06  611  	cmd->tvc_data_direction = data_direction;
3c63f66a0dcdd6 drivers/vhost/scsi.c      Asias He           2013-05-06  612  	cmd->tvc_nexus = tv_nexus;
1a1ff8256af679 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  613  	cmd->inflight = vhost_scsi_get_inflight(vq);
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  614  
1a1ff8256af679 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  615  	memcpy(cmd->tvc_cdb, cdb, VHOST_SCSI_MAX_CDB_SIZE);
95e7c4341b8e28 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  616  
3c63f66a0dcdd6 drivers/vhost/scsi.c      Asias He           2013-05-06  617  	return cmd;
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  618  }
057cbf49a1f082 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  619  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 26015 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling
  2020-09-21 18:23 ` [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling Mike Christie
@ 2020-09-22  1:58   ` Jason Wang
  0 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2020-09-22  1:58 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	pbonzini, stefanha, virtualization


On 2020/9/22 2:23 AM, Mike Christie wrote:
> We must free the vqs array in the open failure path, because
> vhost_vdpa_release will not be called.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
>   drivers/vhost/vdpa.c | 1 +
>   1 file changed, 1 insertion(+)
>
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 3fab94f..3301214 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -808,6 +808,7 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
>   
>   err_init_iotlb:
>   	vhost_dev_cleanup(&v->vdev);
> +	kfree(vqs);
>   err:
>   	atomic_dec(&v->opened);
>   	return r;


Acked-by: Jason Wang <jasowang@redhat.com>


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 7/8] vhost: remove work arg from vhost_work_flush
  2020-09-21 18:23 ` [PATCH 7/8] vhost: remove work arg from vhost_work_flush Mike Christie
@ 2020-09-22  2:01   ` Jason Wang
  0 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2020-09-22  2:01 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	pbonzini, stefanha, virtualization


On 2020/9/22 2:23 AM, Mike Christie wrote:
> vhost_work_flush doesn't do anything with the work arg. This patch drops
> it and then renames vhost_work_flush to vhost_work_dev_flush to reflect
> that the function flushes all the works in the dev and not just a
> specific queue or work item.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/vhost/scsi.c  | 4 ++--
>   drivers/vhost/vhost.c | 8 ++++----
>   drivers/vhost/vhost.h | 2 +-
>   drivers/vhost/vsock.c | 2 +-
>   4 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index 8791db8..5833059 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -1469,8 +1469,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>   	/* Flush both the vhost poll and vhost work */
>   	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
>   		vhost_scsi_flush_vq(vs, i);
> -	vhost_work_flush(&vs->dev, &vs->vs_completion_work);
> -	vhost_work_flush(&vs->dev, &vs->vs_event_work);
> +	vhost_work_dev_flush(&vs->dev);
> +	vhost_work_dev_flush(&vs->dev);
>   
>   	/* Wait for all reqs issued before the flush to be finished */
>   	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 5dd9eb1..f83674e 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -231,7 +231,7 @@ void vhost_poll_stop(struct vhost_poll *poll)
>   }
>   EXPORT_SYMBOL_GPL(vhost_poll_stop);
>   
> -void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
> +void vhost_work_dev_flush(struct vhost_dev *dev)
>   {
>   	struct vhost_flush_struct flush;
>   
> @@ -243,13 +243,13 @@ void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work)
>   		wait_for_completion(&flush.wait_event);
>   	}
>   }
> -EXPORT_SYMBOL_GPL(vhost_work_flush);
> +EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
>   
>   /* Flush any work that has been scheduled. When calling this, don't hold any
>    * locks that are also used by the callback. */
>   void vhost_poll_flush(struct vhost_poll *poll)
>   {
> -	vhost_work_flush(poll->dev, &poll->work);
> +	vhost_work_dev_flush(poll->dev);
>   }
>   EXPORT_SYMBOL_GPL(vhost_poll_flush);
>   
> @@ -542,7 +542,7 @@ static int vhost_attach_cgroups(struct vhost_dev *dev)
>   	attach.owner = current;
>   	vhost_work_init(&attach.work, vhost_attach_cgroups_work);
>   	vhost_work_queue(dev, &attach.work);
> -	vhost_work_flush(dev, &attach.work);
> +	vhost_work_dev_flush(dev);
>   	return attach.ret;
>   }
>   
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 3d30b3d..b91efb5 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -46,7 +46,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
>   void vhost_poll_stop(struct vhost_poll *poll);
>   void vhost_poll_flush(struct vhost_poll *poll);
>   void vhost_poll_queue(struct vhost_poll *poll);
> -void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
> +void vhost_work_dev_flush(struct vhost_dev *dev);
>   long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp);
>   
>   struct vhost_log {
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index a483cec..f40205f 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -652,7 +652,7 @@ static void vhost_vsock_flush(struct vhost_vsock *vsock)
>   	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++)
>   		if (vsock->vqs[i].handle_kick)
>   			vhost_poll_flush(&vsock->vqs[i].poll);
> -	vhost_work_flush(&vsock->dev, &vsock->send_pkt_work);
> +	vhost_work_dev_flush(&vsock->dev);
>   }
>   
>   static void vhost_vsock_reset_orphans(struct sock *sk)



* Re: [PATCH 2/8] vhost: add helper to check if a vq has been setup
  2020-09-21 18:23 ` [PATCH 2/8] vhost: add helper to check if a vq has been setup Mike Christie
@ 2020-09-22  2:02   ` Jason Wang
  2020-09-23 19:12     ` Mike Christie
  2020-09-22  2:45   ` Bart Van Assche
  1 sibling, 1 reply; 23+ messages in thread
From: Jason Wang @ 2020-09-22  2:02 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	pbonzini, stefanha, virtualization


On 2020/9/22 2:23 AM, Mike Christie wrote:
> This adds a helper to check if a vq has been setup. The next patches
> will use this when we move the vhost scsi cmd preallocation from per
> session to per vq. In the per vq case, we only want to allocate cmds
> for vqs that have actually been setup and not for all the possible
> vqs.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
>   drivers/vhost/vhost.c | 9 +++++++++
>   drivers/vhost/vhost.h | 1 +
>   2 files changed, 10 insertions(+)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index b45519c..5dd9eb1 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -305,6 +305,15 @@ static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
>   	spin_lock_init(&call_ctx->ctx_lock);
>   }
>   
> +bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
> +{
> +	if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
> +		return true;
> +	else
> +		return false;
> +}
> +EXPORT_SYMBOL_GPL(vhost_vq_is_setup);


This is probably ok, but I wonder whether we should have something like
what vDPA did (VHOST_SET_VRING_ENABLE) to match the virtio 1.0 device
definition.

Thanks


> +
>   static void vhost_vq_reset(struct vhost_dev *dev,
>   			   struct vhost_virtqueue *vq)
>   {
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 9032d3c..3d30b3d 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -190,6 +190,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *,
>   		      struct vhost_log *log, unsigned int *log_num);
>   void vhost_discard_vq_desc(struct vhost_virtqueue *, int n);
>   
> +bool vhost_vq_is_setup(struct vhost_virtqueue *vq);
>   int vhost_vq_init_access(struct vhost_virtqueue *);
>   int vhost_add_used(struct vhost_virtqueue *, unsigned int head, int len);
>   int vhost_add_used_n(struct vhost_virtqueue *, struct vring_used_elem *heads,



* Re: [PATCH 2/8] vhost: add helper to check if a vq has been setup
  2020-09-21 18:23 ` [PATCH 2/8] vhost: add helper to check if a vq has been setup Mike Christie
  2020-09-22  2:02   ` Jason Wang
@ 2020-09-22  2:45   ` Bart Van Assche
  2020-09-22 15:34     ` Michael Christie
  1 sibling, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2020-09-22  2:45 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	jasowang, pbonzini, stefanha, virtualization

On 2020-09-21 11:23, Mike Christie wrote:
> +bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
> +{
> +	if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
> +		return true;
> +	else
> +		return false;
> +}

Has it been considered to change the body of this function into
"return vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq)"? I'm
concerned that otherwise one or another build bot will suggest making that
change.

Thanks,

Bart.
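
For reference, the one-line form Bart is suggesting would look like this
(a sketch of the suggested cleanup, not the code that was merged):

	bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
	{
		return vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq);
	}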


* Re: [PATCH 4/8] vhost scsi: fix cmd completion race
  2020-09-21 18:23 ` [PATCH 4/8] vhost scsi: fix cmd completion race Mike Christie
@ 2020-09-22  2:48   ` Bart Van Assche
  2020-09-22  2:51     ` Bart Van Assche
  0 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2020-09-22  2:48 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	jasowang, pbonzini, stefanha, virtualization

On 2020-09-21 11:23, Mike Christie wrote:
> We might not do the final se_cmd put from vhost_scsi_complete_cmd_work.
> If the last put happens a little later, then we could hit a race where
> vhost_scsi_complete_cmd_work does vhost_signal, the guest runs and sends
> more IO, and vhost_scsi_handle_vq runs but does not find any free cmds.
> 
> This patch has us delay completing the cmd until the last lio core ref
> is dropped. We then know that once we signal to the guest that the cmd
> is completed, any new command it queues will find a free cmd.

It seems weird to me to see a reference to LIO in the description of a
vhost patch? Since this driver supports more backends than LIO, shouldn't
the patch description be made more generic?

Thanks,

Bart.


* Re: [PATCH 4/8] vhost scsi: fix cmd completion race
  2020-09-22  2:48   ` Bart Van Assche
@ 2020-09-22  2:51     ` Bart Van Assche
  0 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2020-09-22  2:51 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	jasowang, pbonzini, stefanha, virtualization

On 2020-09-21 19:48, Bart Van Assche wrote:
> On 2020-09-21 11:23, Mike Christie wrote:
>> We might not do the final se_cmd put from vhost_scsi_complete_cmd_work.
>> If the last put happens a little later, then we could hit a race where
>> vhost_scsi_complete_cmd_work does vhost_signal, the guest runs and sends
>> more IO, and vhost_scsi_handle_vq runs but does not find any free cmds.
>>
>> This patch has us delay completing the cmd until the last lio core ref
>> is dropped. We then know that once we signal to the guest that the cmd
>> is completed, any new command it queues will find a free cmd.
> 
> It seems weird to me to see a reference to LIO in the description of a
> vhost patch? Since this driver supports more backends than LIO, shouldn't
> the patch description be made more generic?

Please ignore the above comment.

Bart.
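
For context, the shape of the fix described in the commit message above is to
defer the completion handoff to the final reference drop, i.e. do it from the
release callback. A rough sketch, reconstructed only from that description
(the completion list/work field names are assumed from the existing driver,
not quoted in this thread):

	static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
	{
		struct vhost_scsi_cmd *cmd = container_of(se_cmd,
					struct vhost_scsi_cmd, tvc_se_cmd);
		struct vhost_scsi *vs = cmd->tvc_vhost;

		/* The last LIO reference is gone, so the guest cannot see the
		 * completion (and reuse the tag) while LIO still owns the cmd.
		 */
		llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list);
		vhost_work_queue(&vs->dev, &vs->vs_completion_work);
	}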


* Re: [PATCH 2/8] vhost: add helper to check if a vq has been setup
  2020-09-22  2:45   ` Bart Van Assche
@ 2020-09-22 15:34     ` Michael Christie
  0 siblings, 0 replies; 23+ messages in thread
From: Michael Christie @ 2020-09-22 15:34 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Martin K. Petersen, linux-scsi, target-devel, mst, jasowang,
	pbonzini, stefanha, virtualization



> On Sep 21, 2020, at 9:45 PM, Bart Van Assche <bvanassche@acm.org> wrote:
> 
> On 2020-09-21 11:23, Mike Christie wrote:
>> +bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
>> +{
>> +	if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
>> +		return true;
>> +	else
>> +		return false;
>> +}
> 
> Has it been considered to change the body of this function into
> "return vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq)"? I'm

If we do not go the VHOST_SET_VRING_ENABLE route, then I'll do what you suggest.


> concerned that otherwise one or another build bot will suggest making that
> change.


* Re: [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
  2020-09-21 18:23 ` [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session Mike Christie
  2020-09-22  1:32   ` kernel test robot
@ 2020-09-23  7:53   ` Dan Carpenter
  2020-09-24  6:22   ` Michael S. Tsirkin
  2 siblings, 0 replies; 23+ messages in thread
From: Dan Carpenter @ 2020-09-23  7:53 UTC (permalink / raw)
  To: kbuild, Mike Christie, martin.petersen, linux-scsi, target-devel,
	mst, jasowang, pbonzini, stefanha, virtualization
  Cc: lkp, kbuild-all

Hi Mike,

url:    https://github.com/0day-ci/linux/commits/Mike-Christie/vhost-scsi-fixes-and-cleanups/20200922-031251
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
config: i386-randconfig-m021-20200923 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

smatch warnings:
drivers/vhost/scsi.c:606 vhost_scsi_get_cmd() error: uninitialized symbol 'cpu'.

# https://github.com/0day-ci/linux/commit/aef0e1e9298ab68f2d7bdf1afb9a376641b993d5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Mike-Christie/vhost-scsi-fixes-and-cleanups/20200922-031251
git checkout aef0e1e9298ab68f2d7bdf1afb9a376641b993d5
vim +/cpu +606 drivers/vhost/scsi.c

1a1ff8256af679c8 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  572  static struct vhost_scsi_cmd *
aef0e1e9298ab68f drivers/vhost/scsi.c      Mike Christie      2020-09-21  573  vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
95e7c4341b8e28da drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  574  		   unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
95e7c4341b8e28da drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  575  		   u32 exp_data_len, int data_direction)
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  576  {
aef0e1e9298ab68f drivers/vhost/scsi.c      Mike Christie      2020-09-21  577  	struct vhost_scsi_virtqueue *svq = container_of(vq,
aef0e1e9298ab68f drivers/vhost/scsi.c      Mike Christie      2020-09-21  578  					struct vhost_scsi_virtqueue, vq);
1a1ff8256af679c8 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  579  	struct vhost_scsi_cmd *cmd;
1a1ff8256af679c8 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  580  	struct vhost_scsi_nexus *tv_nexus;
b1935f687bb93b20 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  581  	struct scatterlist *sg, *prot_sg;
3aee26b4ae91048c drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  582  	struct page **pages;
10e9cbb6b531117b drivers/vhost/scsi.c      Matthew Wilcox     2018-06-12  583  	int tag, cpu;
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  584  
9871831283e79575 drivers/vhost/scsi.c      Asias He           2013-05-06  585  	tv_nexus = tpg->tpg_nexus;
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  586  	if (!tv_nexus) {
1a1ff8256af679c8 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  587  		pr_err("Unable to locate active struct vhost_scsi_nexus\n");
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  588  		return ERR_PTR(-EIO);
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  589  	}
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  590  
aef0e1e9298ab68f drivers/vhost/scsi.c      Mike Christie      2020-09-21  591  	tag = sbitmap_get(&svq->scsi_tags, 0, false);
4a47d3a1ff10e564 drivers/vhost/scsi.c      Nicholas Bellinger 2013-09-23  592  	if (tag < 0) {
1a1ff8256af679c8 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  593  		pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
4a47d3a1ff10e564 drivers/vhost/scsi.c      Nicholas Bellinger 2013-09-23  594  		return ERR_PTR(-ENOMEM);
4a47d3a1ff10e564 drivers/vhost/scsi.c      Nicholas Bellinger 2013-09-23  595  	}
4a47d3a1ff10e564 drivers/vhost/scsi.c      Nicholas Bellinger 2013-09-23  596  
aef0e1e9298ab68f drivers/vhost/scsi.c      Mike Christie      2020-09-21  597  	cmd = &svq->scsi_cmds[tag];
3aee26b4ae91048c drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  598  	sg = cmd->tvc_sgl;
b1935f687bb93b20 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  599  	prot_sg = cmd->tvc_prot_sgl;
3aee26b4ae91048c drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  600  	pages = cmd->tvc_upages;
473f0b15a4c97d39 drivers/vhost/scsi.c      Markus Elfring     2017-05-20  601  	memset(cmd, 0, sizeof(*cmd));
3aee26b4ae91048c drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  602  	cmd->tvc_sgl = sg;
b1935f687bb93b20 drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  603  	cmd->tvc_prot_sgl = prot_sg;
3aee26b4ae91048c drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-21  604  	cmd->tvc_upages = pages;
4824d3bfb9097ac5 drivers/vhost/scsi.c      Nicholas Bellinger 2013-06-07  605  	cmd->tvc_se_cmd.map_tag = tag;
10e9cbb6b531117b drivers/vhost/scsi.c      Matthew Wilcox     2018-06-12 @606  	cmd->tvc_se_cmd.map_cpu = cpu;

"cpu" is never initialized.

95e7c4341b8e28da drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  607  	cmd->tvc_tag = scsi_tag;
95e7c4341b8e28da drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  608  	cmd->tvc_lun = lun;
95e7c4341b8e28da drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  609  	cmd->tvc_task_attr = task_attr;
3c63f66a0dcdd6cb drivers/vhost/scsi.c      Asias He           2013-05-06  610  	cmd->tvc_exp_data_len = exp_data_len;
3c63f66a0dcdd6cb drivers/vhost/scsi.c      Asias He           2013-05-06  611  	cmd->tvc_data_direction = data_direction;
3c63f66a0dcdd6cb drivers/vhost/scsi.c      Asias He           2013-05-06  612  	cmd->tvc_nexus = tv_nexus;
1a1ff8256af679c8 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  613  	cmd->inflight = vhost_scsi_get_inflight(vq);
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  614  
1a1ff8256af679c8 drivers/vhost/scsi.c      Nicholas Bellinger 2015-01-31  615  	memcpy(cmd->tvc_cdb, cdb, VHOST_SCSI_MAX_CDB_SIZE);
95e7c4341b8e28da drivers/vhost/scsi.c      Nicholas Bellinger 2014-02-22  616  
3c63f66a0dcdd6cb drivers/vhost/scsi.c      Asias He           2013-05-06  617  	return cmd;
057cbf49a1f08297 drivers/vhost/tcm_vhost.c Nicholas Bellinger 2012-07-18  618  }

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


* Re: [PATCH 5/8] vhost scsi: add lun parser helper
  2020-09-21 18:23 ` [PATCH 5/8] vhost scsi: add lun parser helper Mike Christie
@ 2020-09-23 10:23   ` Paolo Bonzini
  0 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2020-09-23 10:23 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	jasowang, stefanha, virtualization

On 21/09/20 20:23, Mike Christie wrote:
> Move the code that parses the lun from the req's lun_buf into a helper,
> so the tmf code can use it in the next patch.
> 
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
>  drivers/vhost/scsi.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index 26d0f75..736ce19 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -899,6 +899,11 @@ static void vhost_scsi_submission_work(struct work_struct *work)
>  	return ret;
>  }
>  
> +static u16 vhost_buf_to_lun(u8 *lun_buf)
> +{
> +	return ((lun_buf[2] << 8) | lun_buf[3]) & 0x3FFF;
> +}
> +
>  static void
>  vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
>  {
> @@ -1037,12 +1042,12 @@ static void vhost_scsi_submission_work(struct work_struct *work)
>  			tag = vhost64_to_cpu(vq, v_req_pi.tag);
>  			task_attr = v_req_pi.task_attr;
>  			cdb = &v_req_pi.cdb[0];
> -			lun = ((v_req_pi.lun[2] << 8) | v_req_pi.lun[3]) & 0x3FFF;
> +			lun = vhost_buf_to_lun(v_req_pi.lun);
>  		} else {
>  			tag = vhost64_to_cpu(vq, v_req.tag);
>  			task_attr = v_req.task_attr;
>  			cdb = &v_req.cdb[0];
> -			lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
> +			lun = vhost_buf_to_lun(v_req.lun);
>  		}
>  		/*
>  		 * Check that the received CDB size does not exceeded our
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
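
As a concrete example of what the helper decodes (the values below are
assumed for illustration): virtio-scsi passes an 8-byte LUN field where byte
0 is 1, byte 1 is the target, and bytes 2-3 carry the single-level SAM LUN,
so for

	u8 lun_buf[8] = { 1, 0, 0x40, 0x05, 0, 0, 0, 0 };

vhost_buf_to_lun() computes ((0x40 << 8) | 0x05) & 0x3FFF = 0x0005, i.e.
LUN 5. The 0x3FFF mask strips the SAM address-method bits, so both flat and
peripheral addressing decode to the same LUN number.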



* Re: [PATCH 2/8] vhost: add helper to check if a vq has been setup
  2020-09-22  2:02   ` Jason Wang
@ 2020-09-23 19:12     ` Mike Christie
  2020-09-24  7:22       ` Jason Wang
  0 siblings, 1 reply; 23+ messages in thread
From: Mike Christie @ 2020-09-23 19:12 UTC (permalink / raw)
  To: Jason Wang, martin.petersen, linux-scsi, target-devel, mst,
	pbonzini, stefanha, virtualization

On 9/21/20 9:02 PM, Jason Wang wrote:
> 
> On 2020/9/22 2:23 AM, Mike Christie wrote:
>> This adds a helper to check if a vq has been setup. The next patches
>> will use this when we move the vhost scsi cmd preallocation from per
>> session to per vq. In the per vq case, we only want to allocate cmds
>> for vqs that have actually been setup and not for all the possible
>> vqs.
>>
>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>> ---
>>   drivers/vhost/vhost.c | 9 +++++++++
>>   drivers/vhost/vhost.h | 1 +
>>   2 files changed, 10 insertions(+)
>>
>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>> index b45519c..5dd9eb1 100644
>> --- a/drivers/vhost/vhost.c
>> +++ b/drivers/vhost/vhost.c
>> @@ -305,6 +305,15 @@ static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
>>       spin_lock_init(&call_ctx->ctx_lock);
>>   }
>>   +bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
>> +{
>> +    if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
>> +        return true;
>> +    else
>> +        return false;
>> +}
>> +EXPORT_SYMBOL_GPL(vhost_vq_is_setup);
> 
> 
> This is probably ok, but I wonder whether we should have something like what vDPA did (VHOST_SET_VRING_ENABLE) to match the virtio 1.0 device definition.

It looks like I can make that work. Some questions:

1. Do you mean a generic VHOST_SET_VRING_ENABLE or a SCSI specific one VHOST_SCSI_SET_VRING_ENABLE?

2. I can see the VHOST_VDPA_SET_VRING_ENABLE kernel code and the vhost_set_vring_enable qemu code, so I have an idea of how it should work for vhost scsi. However, I'm not sure about the requirements for a generic VHOST_SET_VRING_ENABLE, if that is what you meant. I could not find it in the spec either. Could you send me a pointer to the section?

For example, for vhost-net we seem to enable a device in the VHOST_NET_SET_BACKEND ioctl, so I'm not sure what behavior should be or needs to be implemented for net and vsock.




* Re: [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
  2020-09-21 18:23 ` [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session Mike Christie
  2020-09-22  1:32   ` kernel test robot
  2020-09-23  7:53   ` Dan Carpenter
@ 2020-09-24  6:22   ` Michael S. Tsirkin
  2020-09-24 15:31     ` Michael Christie
  2 siblings, 1 reply; 23+ messages in thread
From: Michael S. Tsirkin @ 2020-09-24  6:22 UTC (permalink / raw)
  To: Mike Christie
  Cc: martin.petersen, linux-scsi, target-devel, jasowang, pbonzini,
	stefanha, virtualization

On Mon, Sep 21, 2020 at 01:23:03PM -0500, Mike Christie wrote:
> We currently are limited to 256 cmds per session. This leads to problems
> where, if the user has increased virtqueue_size to more than 2 or
> cmd_per_lun to more than 256, vhost_scsi_get_tag can fail and the guest
> will get IO errors.
> 
> This patch moves the cmd allocation to per vq so we can easily match
> whatever the user has specified for num_queues and
> virtqueue_size/cmd_per_lun. It also makes it easier to control how much
> memory we preallocate. For cases where perf is not as important and
> we can use the current defaults (1 vq and 128 cmds per vq), memory use
> from preallocated cmds is cut in half. For cases where we are willing
> to use more memory for higher perf, cmd mem use will now increase as
> the num queues and queue depth increase.
> 
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
>  drivers/vhost/scsi.c | 204 ++++++++++++++++++++++++++++++++-------------------
>  1 file changed, 127 insertions(+), 77 deletions(-)
> 
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index b22adf0..13311b8 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -52,7 +52,6 @@
>  #define VHOST_SCSI_VERSION  "v0.1"
>  #define VHOST_SCSI_NAMELEN 256
>  #define VHOST_SCSI_MAX_CDB_SIZE 32
> -#define VHOST_SCSI_DEFAULT_TAGS 256
>  #define VHOST_SCSI_PREALLOC_SGLS 2048
>  #define VHOST_SCSI_PREALLOC_UPAGES 2048
>  #define VHOST_SCSI_PREALLOC_PROT_SGLS 2048
> @@ -189,6 +188,9 @@ struct vhost_scsi_virtqueue {
>  	 * Writers must also take dev mutex and flush under it.
>  	 */
>  	int inflight_idx;
> +	struct vhost_scsi_cmd *scsi_cmds;
> +	struct sbitmap scsi_tags;
> +	int max_cmds;
>  };
>  
>  struct vhost_scsi {
> @@ -324,7 +326,9 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
>  {
>  	struct vhost_scsi_cmd *tv_cmd = container_of(se_cmd,
>  				struct vhost_scsi_cmd, tvc_se_cmd);
> -	struct se_session *se_sess = tv_cmd->tvc_nexus->tvn_se_sess;
> +	struct vhost_scsi_virtqueue *svq = container_of(tv_cmd->tvc_vq,
> +				struct vhost_scsi_virtqueue, vq);
> +	struct vhost_scsi_inflight *inflight = tv_cmd->inflight;
>  	int i;
>  
>  	if (tv_cmd->tvc_sgl_count) {
> @@ -336,8 +340,8 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
>  			put_page(sg_page(&tv_cmd->tvc_prot_sgl[i]));
>  	}
>  
> -	vhost_scsi_put_inflight(tv_cmd->inflight);
> -	target_free_tag(se_sess, se_cmd);
> +	sbitmap_clear_bit(&svq->scsi_tags, se_cmd->map_tag);
> +	vhost_scsi_put_inflight(inflight);
>  }
>  
>  static u32 vhost_scsi_sess_get_index(struct se_session *se_sess)
> @@ -566,13 +570,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
>  }
>  
>  static struct vhost_scsi_cmd *
> -vhost_scsi_get_tag(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
> +vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
>  		   unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
>  		   u32 exp_data_len, int data_direction)
>  {
> +	struct vhost_scsi_virtqueue *svq = container_of(vq,
> +					struct vhost_scsi_virtqueue, vq);
>  	struct vhost_scsi_cmd *cmd;
>  	struct vhost_scsi_nexus *tv_nexus;
> -	struct se_session *se_sess;
>  	struct scatterlist *sg, *prot_sg;
>  	struct page **pages;
>  	int tag, cpu;
> @@ -582,15 +587,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
>  		pr_err("Unable to locate active struct vhost_scsi_nexus\n");
>  		return ERR_PTR(-EIO);
>  	}
> -	se_sess = tv_nexus->tvn_se_sess;
>  
> -	tag = sbitmap_queue_get(&se_sess->sess_tag_pool, &cpu);
> +	tag = sbitmap_get(&svq->scsi_tags, 0, false);
>  	if (tag < 0) {
>  		pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
>  		return ERR_PTR(-ENOMEM);
>  	}


After this change, cpu is uninitialized.


>  
> -	cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[tag];
> +	cmd = &svq->scsi_cmds[tag];
>  	sg = cmd->tvc_sgl;
>  	prot_sg = cmd->tvc_prot_sgl;
>  	pages = cmd->tvc_upages;
> @@ -1065,11 +1069,11 @@ static void vhost_scsi_submission_work(struct work_struct *work)
>  				scsi_command_size(cdb), VHOST_SCSI_MAX_CDB_SIZE);
>  				goto err;
>  		}
> -		cmd = vhost_scsi_get_tag(vq, tpg, cdb, tag, lun, task_attr,
> +		cmd = vhost_scsi_get_cmd(vq, tpg, cdb, tag, lun, task_attr,
>  					 exp_data_len + prot_bytes,
>  					 data_direction);
>  		if (IS_ERR(cmd)) {
> -			vq_err(vq, "vhost_scsi_get_tag failed %ld\n",
> +			vq_err(vq, "vhost_scsi_get_cmd failed %ld\n",
>  			       PTR_ERR(cmd));
>  			goto err;
>  		}
> @@ -1373,6 +1377,83 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>  		wait_for_completion(&old_inflight[i]->comp);
>  }
>  
> +static void vhost_scsi_destroy_vq_cmds(struct vhost_virtqueue *vq)
> +{
> +	struct vhost_scsi_virtqueue *svq = container_of(vq,
> +					struct vhost_scsi_virtqueue, vq);
> +	struct vhost_scsi_cmd *tv_cmd;
> +	unsigned int i;
> +
> +	if (!svq->scsi_cmds)
> +		return;
> +
> +	for (i = 0; i < svq->max_cmds; i++) {
> +		tv_cmd = &svq->scsi_cmds[i];
> +
> +		kfree(tv_cmd->tvc_sgl);
> +		kfree(tv_cmd->tvc_prot_sgl);
> +		kfree(tv_cmd->tvc_upages);
> +	}
> +
> +	sbitmap_free(&svq->scsi_tags);
> +	kfree(svq->scsi_cmds);
> +	svq->scsi_cmds = NULL;
> +}
> +
> +static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
> +{
> +	struct vhost_scsi_virtqueue *svq = container_of(vq,
> +					struct vhost_scsi_virtqueue, vq);
> +	struct vhost_scsi_cmd *tv_cmd;
> +	unsigned int i;
> +
> +	if (svq->scsi_cmds)
> +		return 0;
> +
> +	if (sbitmap_init_node(&svq->scsi_tags, max_cmds, -1, GFP_KERNEL,
> +			      NUMA_NO_NODE))
> +		return -ENOMEM;
> +	svq->max_cmds = max_cmds;
> +
> +	svq->scsi_cmds = kcalloc(max_cmds, sizeof(*tv_cmd), GFP_KERNEL);
> +	if (!svq->scsi_cmds) {
> +		sbitmap_free(&svq->scsi_tags);
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < max_cmds; i++) {
> +		tv_cmd = &svq->scsi_cmds[i];
> +
> +		tv_cmd->tvc_sgl = kcalloc(VHOST_SCSI_PREALLOC_SGLS,
> +					  sizeof(struct scatterlist),
> +					  GFP_KERNEL);
> +		if (!tv_cmd->tvc_sgl) {
> +			pr_err("Unable to allocate tv_cmd->tvc_sgl\n");
> +			goto out;
> +		}
> +
> +		tv_cmd->tvc_upages = kcalloc(VHOST_SCSI_PREALLOC_UPAGES,
> +					     sizeof(struct page *),
> +					     GFP_KERNEL);
> +		if (!tv_cmd->tvc_upages) {
> +			pr_err("Unable to allocate tv_cmd->tvc_upages\n");
> +			goto out;
> +		}
> +
> +		tv_cmd->tvc_prot_sgl = kcalloc(VHOST_SCSI_PREALLOC_PROT_SGLS,
> +					       sizeof(struct scatterlist),
> +					       GFP_KERNEL);
> +		if (!tv_cmd->tvc_prot_sgl) {
> +			pr_err("Unable to allocate tv_cmd->tvc_prot_sgl\n");
> +			goto out;
> +		}
> +	}
> +	return 0;
> +out:
> +	vhost_scsi_destroy_vq_cmds(vq);
> +	return -ENOMEM;
> +}
> +
>  /*
>   * Called from vhost_scsi_ioctl() context to walk the list of available
>   * vhost_scsi_tpg with an active struct vhost_scsi_nexus
> @@ -1427,10 +1508,9 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>  
>  		if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
>  			if (vs->vs_tpg && vs->vs_tpg[tpg->tport_tpgt]) {
> -				kfree(vs_tpg);
>  				mutex_unlock(&tpg->tv_tpg_mutex);
>  				ret = -EEXIST;
> -				goto out;
> +				goto undepend;
>  			}
>  			/*
>  			 * In order to ensure individual vhost-scsi configfs
> @@ -1442,9 +1522,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>  			ret = target_depend_item(&se_tpg->tpg_group.cg_item);
>  			if (ret) {
>  				pr_warn("target_depend_item() failed: %d\n", ret);
> -				kfree(vs_tpg);
>  				mutex_unlock(&tpg->tv_tpg_mutex);
> -				goto out;
> +				goto undepend;
>  			}
>  			tpg->tv_tpg_vhost_count++;
>  			tpg->vhost_scsi = vs;
> @@ -1457,6 +1536,16 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>  	if (match) {
>  		memcpy(vs->vs_vhost_wwpn, t->vhost_wwpn,
>  		       sizeof(vs->vs_vhost_wwpn));
> +
> +		for (i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) {
> +			vq = &vs->vqs[i].vq;
> +			if (!vhost_vq_is_setup(vq))
> +				continue;
> +
> +			if (vhost_scsi_setup_vq_cmds(vq, vq->num))
> +				goto destroy_vq_cmds;
> +		}
> +
>  		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
>  			vq = &vs->vqs[i].vq;
>  			mutex_lock(&vq->mutex);
> @@ -1476,7 +1565,22 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>  	vhost_scsi_flush(vs);
>  	kfree(vs->vs_tpg);
>  	vs->vs_tpg = vs_tpg;
> +	goto out;
>  
> +destroy_vq_cmds:
> +	for (i--; i >= VHOST_SCSI_VQ_IO; i--) {
> +		if (!vhost_vq_get_backend(&vs->vqs[i].vq))
> +			vhost_scsi_destroy_vq_cmds(&vs->vqs[i].vq);
> +	}
> +undepend:
> +	for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
> +		tpg = vs_tpg[i];
> +		if (tpg) {
> +			tpg->tv_tpg_vhost_count--;
> +			target_undepend_item(&tpg->se_tpg.tpg_group.cg_item);
> +		}
> +	}
> +	kfree(vs_tpg);
>  out:
>  	mutex_unlock(&vs->dev.mutex);
>  	mutex_unlock(&vhost_scsi_mutex);
> @@ -1549,6 +1653,12 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
>  			mutex_lock(&vq->mutex);
>  			vhost_vq_set_backend(vq, NULL);
>  			mutex_unlock(&vq->mutex);
> +			/*
> +			 * Make sure cmds are not running before tearing them
> +			 * down.
> +			 */
> +			vhost_scsi_flush(vs);
> +			vhost_scsi_destroy_vq_cmds(vq);
>  		}
>  	}
>  	/*
> @@ -1842,23 +1952,6 @@ static void vhost_scsi_port_unlink(struct se_portal_group *se_tpg,
>  	mutex_unlock(&vhost_scsi_mutex);
>  }
>  
> -static void vhost_scsi_free_cmd_map_res(struct se_session *se_sess)
> -{
> -	struct vhost_scsi_cmd *tv_cmd;
> -	unsigned int i;
> -
> -	if (!se_sess->sess_cmd_map)
> -		return;
> -
> -	for (i = 0; i < VHOST_SCSI_DEFAULT_TAGS; i++) {
> -		tv_cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[i];
> -
> -		kfree(tv_cmd->tvc_sgl);
> -		kfree(tv_cmd->tvc_prot_sgl);
> -		kfree(tv_cmd->tvc_upages);
> -	}
> -}
> -
>  static ssize_t vhost_scsi_tpg_attrib_fabric_prot_type_store(
>  		struct config_item *item, const char *page, size_t count)
>  {
> @@ -1898,45 +1991,6 @@ static ssize_t vhost_scsi_tpg_attrib_fabric_prot_type_show(
>  	NULL,
>  };
>  
> -static int vhost_scsi_nexus_cb(struct se_portal_group *se_tpg,
> -			       struct se_session *se_sess, void *p)
> -{
> -	struct vhost_scsi_cmd *tv_cmd;
> -	unsigned int i;
> -
> -	for (i = 0; i < VHOST_SCSI_DEFAULT_TAGS; i++) {
> -		tv_cmd = &((struct vhost_scsi_cmd *)se_sess->sess_cmd_map)[i];
> -
> -		tv_cmd->tvc_sgl = kcalloc(VHOST_SCSI_PREALLOC_SGLS,
> -					  sizeof(struct scatterlist),
> -					  GFP_KERNEL);
> -		if (!tv_cmd->tvc_sgl) {
> -			pr_err("Unable to allocate tv_cmd->tvc_sgl\n");
> -			goto out;
> -		}
> -
> -		tv_cmd->tvc_upages = kcalloc(VHOST_SCSI_PREALLOC_UPAGES,
> -					     sizeof(struct page *),
> -					     GFP_KERNEL);
> -		if (!tv_cmd->tvc_upages) {
> -			pr_err("Unable to allocate tv_cmd->tvc_upages\n");
> -			goto out;
> -		}
> -
> -		tv_cmd->tvc_prot_sgl = kcalloc(VHOST_SCSI_PREALLOC_PROT_SGLS,
> -					       sizeof(struct scatterlist),
> -					       GFP_KERNEL);
> -		if (!tv_cmd->tvc_prot_sgl) {
> -			pr_err("Unable to allocate tv_cmd->tvc_prot_sgl\n");
> -			goto out;
> -		}
> -	}
> -	return 0;
> -out:
> -	vhost_scsi_free_cmd_map_res(se_sess);
> -	return -ENOMEM;
> -}
> -
>  static int vhost_scsi_make_nexus(struct vhost_scsi_tpg *tpg,
>  				const char *name)
>  {
> @@ -1960,12 +2014,9 @@ static int vhost_scsi_make_nexus(struct vhost_scsi_tpg *tpg,
>  	 * struct se_node_acl for the vhost_scsi struct se_portal_group with
>  	 * the SCSI Initiator port name of the passed configfs group 'name'.
>  	 */
> -	tv_nexus->tvn_se_sess = target_setup_session(&tpg->se_tpg,
> -					VHOST_SCSI_DEFAULT_TAGS,
> -					sizeof(struct vhost_scsi_cmd),
> +	tv_nexus->tvn_se_sess = target_setup_session(&tpg->se_tpg, 0, 0,
>  					TARGET_PROT_DIN_PASS | TARGET_PROT_DOUT_PASS,
> -					(unsigned char *)name, tv_nexus,
> -					vhost_scsi_nexus_cb);
> +					(unsigned char *)name, tv_nexus, NULL);
>  	if (IS_ERR(tv_nexus->tvn_se_sess)) {
>  		mutex_unlock(&tpg->tv_tpg_mutex);
>  		kfree(tv_nexus);
> @@ -2015,7 +2066,6 @@ static int vhost_scsi_drop_nexus(struct vhost_scsi_tpg *tpg)
>  		" %s Initiator Port: %s\n", vhost_scsi_dump_proto_id(tpg->tport),
>  		tv_nexus->tvn_se_sess->se_node_acl->initiatorname);
>  
> -	vhost_scsi_free_cmd_map_res(se_sess);
>  	/*
>  	 * Release the SCSI I_T Nexus to the emulated vhost Target Port
>  	 */
> -- 
> 1.8.3.1
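
Tying the hunks quoted above together, the per-vq tag pool this patch
introduces has a simple lifecycle (condensed from the diff above, not new
code):

	/* set-endpoint (vhost_scsi_setup_vq_cmds): one bit per preallocated cmd */
	sbitmap_init_node(&svq->scsi_tags, max_cmds, -1, GFP_KERNEL, NUMA_NO_NODE);

	/* submission (vhost_scsi_get_cmd): claim a free tag, index the cmd array */
	tag = sbitmap_get(&svq->scsi_tags, 0, false);
	if (tag < 0)
		return ERR_PTR(-ENOMEM);
	cmd = &svq->scsi_cmds[tag];

	/* completion (vhost_scsi_release_cmd): return the tag to the pool */
	sbitmap_clear_bit(&svq->scsi_tags, se_cmd->map_tag);

	/* clear-endpoint (vhost_scsi_destroy_vq_cmds): tear the pool down */
	sbitmap_free(&svq->scsi_tags);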



* Re: [PATCH 2/8] vhost: add helper to check if a vq has been setup
  2020-09-23 19:12     ` Mike Christie
@ 2020-09-24  7:22       ` Jason Wang
  0 siblings, 0 replies; 23+ messages in thread
From: Jason Wang @ 2020-09-24  7:22 UTC (permalink / raw)
  To: Mike Christie, martin.petersen, linux-scsi, target-devel, mst,
	pbonzini, stefanha, virtualization


On 2020/9/24 3:12 AM, Mike Christie wrote:
> On 9/21/20 9:02 PM, Jason Wang wrote:
>> On 2020/9/22 2:23 AM, Mike Christie wrote:
>>> This adds a helper to check if a vq has been setup. The next patches
>>> will use this when we move the vhost scsi cmd preallocation from per
>>> session to per vq. In the per vq case, we only want to allocate cmds
>>> for vqs that have actually been setup and not for all the possible
>>> vqs.
>>>
>>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>>> ---
>>>    drivers/vhost/vhost.c | 9 +++++++++
>>>    drivers/vhost/vhost.h | 1 +
>>>    2 files changed, 10 insertions(+)
>>>
>>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>>> index b45519c..5dd9eb1 100644
>>> --- a/drivers/vhost/vhost.c
>>> +++ b/drivers/vhost/vhost.c
>>> @@ -305,6 +305,15 @@ static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
>>>        spin_lock_init(&call_ctx->ctx_lock);
>>>    }
>>>    +bool vhost_vq_is_setup(struct vhost_virtqueue *vq)
>>> +{
>>> +    if (vq->avail && vq->desc && vq->used && vhost_vq_access_ok(vq))
>>> +        return true;
>>> +    else
>>> +        return false;
>>> +}
>>> +EXPORT_SYMBOL_GPL(vhost_vq_is_setup);
>>
>> This is probably ok, but I wonder whether we should have something like what vDPA did (VHOST_SET_VRING_ENABLE) to match the virtio 1.0 device definition.
> It looks like I can make that work. Some questions:
>
> 1. Do you mean a generic VHOST_SET_VRING_ENABLE or a SCSI specific one VHOST_SCSI_SET_VRING_ENABLE?


It would be better if we could make it generic.


>
> 2. I can see the VHOST_VDPA_SET_VRING_ENABLE kernel code and the vhost_set_vring_enable qemu code, so I have an idea of how it should work for vhost scsi. However, I'm not sure about the requirements for a generic VHOST_SET_VRING_ENABLE, if that is what you meant. I could not find it in the spec either. Could you send me a pointer to the section?


In the spec, for PCI, it's the queue_enable field for modern devices.


>
> For example, for vhost-net we seem to enable a device in the VHOST_NET_SET_BACKEND ioctl, so I'm not sure what behavior should be or needs to be implemented for net and vsock.


Yes, but VHOST_NET_SET_BACKEND is for the whole device, not a specific
virtqueue.


Thanks


>
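
To make this concrete: if vhost-scsi grew a per-virtqueue enable along the
lines discussed here (mirroring queue_enable), the backend hook could look
roughly like the sketch below. This is purely hypothetical; no such ioctl
plumbing or callback exists in this series, and the helpers from the posted
patches are reused only for illustration.

	/* Hypothetical per-vq enable hook, called from the ioctl path. */
	static int vhost_scsi_set_vring_enable(struct vhost_scsi *vs,
					       unsigned int index, bool enable)
	{
		struct vhost_virtqueue *vq = &vs->vqs[index].vq;
		int ret = 0;

		mutex_lock(&vq->mutex);
		if (enable)
			ret = vhost_scsi_setup_vq_cmds(vq, vq->num);
		/* The disable path would need to flush outstanding cmds and
		 * then call vhost_scsi_destroy_vq_cmds(), as the
		 * clear-endpoint path in this series already does.
		 */
		mutex_unlock(&vq->mutex);
		return ret;
	}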



* Re: [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
  2020-09-24  6:22   ` Michael S. Tsirkin
@ 2020-09-24 15:31     ` Michael Christie
  0 siblings, 0 replies; 23+ messages in thread
From: Michael Christie @ 2020-09-24 15:31 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Martin K. Petersen, linux-scsi, target-devel, jasowang, pbonzini,
	stefanha, virtualization



> On Sep 24, 2020, at 1:22 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> 
> On Mon, Sep 21, 2020 at 01:23:03PM -0500, Mike Christie wrote:
>> We currently are limited to 256 cmds per session. This leads to problems
>> where, if the user has increased virtqueue_size to more than 2 or
>> cmd_per_lun to more than 256, vhost_scsi_get_tag can fail and the guest
>> will get IO errors.
>> 
>> This patch moves the cmd allocation to per vq so we can easily match
>> whatever the user has specified for num_queues and
>> virtqueue_size/cmd_per_lun. It also makes it easier to control how much
>> memory we preallocate. For cases where perf is not as important and
>> we can use the current defaults (1 vq and 128 cmds per vq), memory use
>> from preallocated cmds is cut in half. For cases where we are willing
>> to use more memory for higher perf, cmd mem use will now increase as
>> the num queues and queue depth increase.
>> 
>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>> ---
>> drivers/vhost/scsi.c | 204 ++++++++++++++++++++++++++++++++-------------------
>> 1 file changed, 127 insertions(+), 77 deletions(-)
>> 
>> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
>> index b22adf0..13311b8 100644
>> --- a/drivers/vhost/scsi.c
>> +++ b/drivers/vhost/scsi.c
>> @@ -52,7 +52,6 @@
>> #define VHOST_SCSI_VERSION  "v0.1"
>> #define VHOST_SCSI_NAMELEN 256
>> #define VHOST_SCSI_MAX_CDB_SIZE 32
>> -#define VHOST_SCSI_DEFAULT_TAGS 256
>> #define VHOST_SCSI_PREALLOC_SGLS 2048
>> #define VHOST_SCSI_PREALLOC_UPAGES 2048
>> #define VHOST_SCSI_PREALLOC_PROT_SGLS 2048
>> @@ -189,6 +188,9 @@ struct vhost_scsi_virtqueue {
>> 	 * Writers must also take dev mutex and flush under it.
>> 	 */
>> 	int inflight_idx;
>> +	struct vhost_scsi_cmd *scsi_cmds;
>> +	struct sbitmap scsi_tags;
>> +	int max_cmds;
>> };
>> 
>> struct vhost_scsi {
>> @@ -324,7 +326,9 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
>> {
>> 	struct vhost_scsi_cmd *tv_cmd = container_of(se_cmd,
>> 				struct vhost_scsi_cmd, tvc_se_cmd);
>> -	struct se_session *se_sess = tv_cmd->tvc_nexus->tvn_se_sess;
>> +	struct vhost_scsi_virtqueue *svq = container_of(tv_cmd->tvc_vq,
>> +				struct vhost_scsi_virtqueue, vq);
>> +	struct vhost_scsi_inflight *inflight = tv_cmd->inflight;
>> 	int i;
>> 
>> 	if (tv_cmd->tvc_sgl_count) {
>> @@ -336,8 +340,8 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
>> 			put_page(sg_page(&tv_cmd->tvc_prot_sgl[i]));
>> 	}
>> 
>> -	vhost_scsi_put_inflight(tv_cmd->inflight);
>> -	target_free_tag(se_sess, se_cmd);
>> +	sbitmap_clear_bit(&svq->scsi_tags, se_cmd->map_tag);
>> +	vhost_scsi_put_inflight(inflight);
>> }
>> 
>> static u32 vhost_scsi_sess_get_index(struct se_session *se_sess)
>> @@ -566,13 +570,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
>> }
>> 
>> static struct vhost_scsi_cmd *
>> -vhost_scsi_get_tag(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
>> +vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
>> 		   unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
>> 		   u32 exp_data_len, int data_direction)
>> {
>> +	struct vhost_scsi_virtqueue *svq = container_of(vq,
>> +					struct vhost_scsi_virtqueue, vq);
>> 	struct vhost_scsi_cmd *cmd;
>> 	struct vhost_scsi_nexus *tv_nexus;
>> -	struct se_session *se_sess;
>> 	struct scatterlist *sg, *prot_sg;
>> 	struct page **pages;
>> 	int tag, cpu;
>> @@ -582,15 +587,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
>> 		pr_err("Unable to locate active struct vhost_scsi_nexus\n");
>> 		return ERR_PTR(-EIO);
>> 	}
>> -	se_sess = tv_nexus->tvn_se_sess;
>> 
>> -	tag = sbitmap_queue_get(&se_sess->sess_tag_pool, &cpu);
>> +	tag = sbitmap_get(&svq->scsi_tags, 0, false);
>> 	if (tag < 0) {
>> 		pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
>> 		return ERR_PTR(-ENOMEM);
>> 	}
> 
> 
> After this change, cpu is uninitialized.

I’ve fixed this.

We don’t use the cmd’s map_cpu field anymore, so I have deleted it and the cpu var above.
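
For completeness, the reworked allocation path then reduces to roughly the
following (a sketch of the described v2 change, not the exact respin):

	tag = sbitmap_get(&svq->scsi_tags, 0, false);
	if (tag < 0) {
		pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
		return ERR_PTR(-ENOMEM);
	}

	cmd = &svq->scsi_cmds[tag];
	...
	cmd->tvc_se_cmd.map_tag = tag;
	/* the unused 'cpu' local and the map_cpu assignment are dropped */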


end of thread

Thread overview: 23 messages
2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
2020-09-21 18:23 ` [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling Mike Christie
2020-09-22  1:58   ` Jason Wang
2020-09-21 18:23 ` [PATCH 2/8] vhost: add helper to check if a vq has been setup Mike Christie
2020-09-22  2:02   ` Jason Wang
2020-09-23 19:12     ` Mike Christie
2020-09-24  7:22       ` Jason Wang
2020-09-22  2:45   ` Bart Van Assche
2020-09-22 15:34     ` Michael Christie
2020-09-21 18:23 ` [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session Mike Christie
2020-09-22  1:32   ` kernel test robot
2020-09-23  7:53   ` Dan Carpenter
2020-09-24  6:22   ` Michael S. Tsirkin
2020-09-24 15:31     ` Michael Christie
2020-09-21 18:23 ` [PATCH 4/8] vhost scsi: fix cmd completion race Mike Christie
2020-09-22  2:48   ` Bart Van Assche
2020-09-22  2:51     ` Bart Van Assche
2020-09-21 18:23 ` [PATCH 5/8] vhost scsi: add lun parser helper Mike Christie
2020-09-23 10:23   ` Paolo Bonzini
2020-09-21 18:23 ` [PATCH 6/8] vhost scsi: Add support for LUN resets Mike Christie
2020-09-21 18:23 ` [PATCH 7/8] vhost: remove work arg from vhost_work_flush Mike Christie
2020-09-22  2:01   ` Jason Wang
2020-09-21 18:23 ` [PATCH 8/8] vhost scsi: remove extra flushes Mike Christie
