From: Mike Christie <michael.christie@oracle.com>
To: target-devel@vger.kernel.org, linux-scsi@vger.kernel.org,
	stefanha@redhat.com, pbonzini@redhat.com, jasowang@redhat.com,
	mst@redhat.com, sgarzare@redhat.com,
	virtualization@lists.linux-foundation.org
Cc: Mike Christie <michael.christie@oracle.com>
Subject: [PATCH V4 09/12] vhost-scsi: flush IO vqs then send TMF rsp
Date: Thu,  4 Nov 2021 14:04:59 -0500	[thread overview]
Message-ID: <20211104190502.7053-10-michael.christie@oracle.com> (raw)
In-Reply-To: <20211104190502.7053-1-michael.christie@oracle.com>

With one worker we will always send the scsi cmd responses and then send
the TMF rsp, because LIO will always complete the scsi cmds first and
then call into us to send the TMF response.

With multiple workers, one of the IO vq threads could still be running
after the TMF is queued, so this has us flush the IO vqs that don't
share a worker with the CTL vq (the vq that handles TMFs) before
sending the TMF response.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 08beba73ada4..29d9adcdb4fc 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1153,12 +1153,28 @@ static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
 {
 	struct vhost_scsi_tmf *tmf = container_of(work, struct vhost_scsi_tmf,
 						  vwork);
-	int resp_code;
+	struct vhost_virtqueue *ctl_vq, *vq;
+	int resp_code, i;
+
+	if (tmf->scsi_resp == TMR_FUNCTION_COMPLETE) {
+		/*
+		 * Flush IO vqs that don't share a worker with the ctl to make
+		 * sure they have sent their responses before us.
+		 */
+		ctl_vq = &tmf->vhost->vqs[VHOST_SCSI_VQ_CTL].vq;
+		for (i = VHOST_SCSI_VQ_IO; i < tmf->vhost->dev.nvqs; i++) {
+			vq = &tmf->vhost->vqs[i].vq;
+
+			if (vhost_vq_is_setup(vq) &&
+			    vq->worker != ctl_vq->worker) {
+				vhost_vq_work_flush(vq);
+			}
+		}
 
-	if (tmf->scsi_resp == TMR_FUNCTION_COMPLETE)
 		resp_code = VIRTIO_SCSI_S_FUNCTION_SUCCEEDED;
-	else
+	} else {
 		resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED;
+	}
 
 	vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs,
 				 tmf->vq_desc, &tmf->resp_iov, resp_code);
-- 
2.25.1



  parent reply	other threads:[~2021-11-04 19:05 UTC|newest]

Thread overview: 26+ messages
2021-11-04 19:04 [PATCH V4 00/12] vhost: multiple worker support Mike Christie
2021-11-04 19:04 ` [PATCH V4 01/12] vhost: add vhost_worker pointer to vhost_virtqueue Mike Christie
2021-11-04 19:04 ` [PATCH V4 02/12] vhost, vhost-net: add helper to check if vq has work Mike Christie
2021-11-04 19:04 ` [PATCH V4 03/12] vhost: take worker or vq instead of dev for queueing Mike Christie
2021-11-04 19:04 ` [PATCH V4 04/12] vhost: take worker or vq instead of dev for flushing Mike Christie
2021-11-04 19:04 ` [PATCH V4 05/12] vhost: convert poll work to be vq based Mike Christie
2021-11-04 19:04 ` [PATCH V4 06/12] vhost-sock: convert to vq helpers Mike Christie
2021-11-04 19:04 ` [PATCH V4 07/12] vhost-scsi: make SCSI cmd completion per vq Mike Christie
2021-11-04 19:04 ` [PATCH V4 08/12] vhost-scsi: convert to vq helpers Mike Christie
2021-11-04 19:04 ` [PATCH V4 09/12] vhost-scsi: flush IO vqs then send TMF rsp Mike Christie [this message]
2021-11-04 19:05 ` [PATCH V4 10/12] vhost: remove device wide queu/flushing helpers Mike Christie
2021-11-04 19:05 ` [PATCH V4 11/12] vhost: allow userspace to create workers Mike Christie
2021-11-04 19:05 ` [PATCH V4 12/12] vhost: allow worker attachment after initial setup Mike Christie
