From: Mike Christie <michael.christie@oracle.com>
To: target-devel@vger.kernel.org, linux-scsi@vger.kernel.org, stefanha@redhat.com, pbonzini@redhat.com, jasowang@redhat.com, mst@redhat.com, sgarzare@redhat.com, virtualization@lists.linux-foundation.org
Subject: [PATCH V3 09/11] vhost-scsi: flush IO vqs then send TMF rsp
Date: Fri, 22 Oct 2021 00:19:09 -0500
Message-ID: <20211022051911.108383-11-michael.christie@oracle.com> (raw)
In-Reply-To: <20211022051911.108383-1-michael.christie@oracle.com>

With one worker we will always send the scsi cmd responses then send
the TMF rsp, because LIO will always complete the scsi cmds first then
call into us to send the TMF response.

With multiple workers, one of the IO vq threads could be run after the
TMF is queued, so this has us flush all the IO vqs before sending the
TMF response.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 08beba73ada4..29d9adcdb4fc 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1153,12 +1153,28 @@ static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
 {
 	struct vhost_scsi_tmf *tmf = container_of(work, struct vhost_scsi_tmf,
 						  vwork);
-	int resp_code;
+	struct vhost_virtqueue *ctl_vq, *vq;
+	int resp_code, i;
+
+	if (tmf->scsi_resp == TMR_FUNCTION_COMPLETE) {
+		/*
+		 * Flush IO vqs that don't share a worker with the ctl to make
+		 * sure they have sent their responses before us.
+		 */
+		ctl_vq = &tmf->vhost->vqs[VHOST_SCSI_VQ_CTL].vq;
+		for (i = VHOST_SCSI_VQ_IO; i < tmf->vhost->dev.nvqs; i++) {
+			vq = &tmf->vhost->vqs[i].vq;
+
+			if (vhost_vq_is_setup(vq) &&
+			    vq->worker != ctl_vq->worker) {
+				vhost_vq_work_flush(vq);
+			}
+		}

-	if (tmf->scsi_resp == TMR_FUNCTION_COMPLETE)
 		resp_code = VIRTIO_SCSI_S_FUNCTION_SUCCEEDED;
-	else
+	} else {
 		resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED;
+	}

	vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs,
				 tmf->vq_desc, &tmf->resp_iov, resp_code);
-- 
2.25.1