linux-scsi.vger.kernel.org archive mirror
From: Michael Christie <michael.christie@oracle.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	jasowang@redhat.com, pbonzini@redhat.com, stefanha@redhat.com,
	virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session
Date: Thu, 24 Sep 2020 10:31:30 -0500	[thread overview]
Message-ID: <BE3F7F89-D0BF-44BA-921A-112E104C96C5@oracle.com> (raw)
In-Reply-To: <20200924022107-mutt-send-email-mst@kernel.org>



> On Sep 24, 2020, at 1:22 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> 
> On Mon, Sep 21, 2020 at 01:23:03PM -0500, Mike Christie wrote:
>> We are currently limited to 256 cmds per session. This leads to problems
>> where, if the user has increased virtqueue_size to more than 2 or
>> cmd_per_lun to more than 256, vhost_scsi_get_tag can fail and the guest
>> will get IO errors.
>> 
>> This patch moves the cmd allocation to per vq so we can easily match
>> whatever the user has specified for num_queues and
>> virtqueue_size/cmd_per_lun. It also makes it easier to control how much
>> memory we preallocate. For cases where perf is not as important and
>> we can use the current defaults (1 vq and 128 cmds per vq), memory use
>> from preallocated cmds is cut in half. For cases where we are willing
>> to use more memory for higher perf, cmd mem use will now increase as
>> the num queues and queue depth increase.
>> 
>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>> ---
>> drivers/vhost/scsi.c | 204 ++++++++++++++++++++++++++++++++-------------------
>> 1 file changed, 127 insertions(+), 77 deletions(-)
>> 
>> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
>> index b22adf0..13311b8 100644
>> --- a/drivers/vhost/scsi.c
>> +++ b/drivers/vhost/scsi.c
>> @@ -52,7 +52,6 @@
>> #define VHOST_SCSI_VERSION  "v0.1"
>> #define VHOST_SCSI_NAMELEN 256
>> #define VHOST_SCSI_MAX_CDB_SIZE 32
>> -#define VHOST_SCSI_DEFAULT_TAGS 256
>> #define VHOST_SCSI_PREALLOC_SGLS 2048
>> #define VHOST_SCSI_PREALLOC_UPAGES 2048
>> #define VHOST_SCSI_PREALLOC_PROT_SGLS 2048
>> @@ -189,6 +188,9 @@ struct vhost_scsi_virtqueue {
>> 	 * Writers must also take dev mutex and flush under it.
>> 	 */
>> 	int inflight_idx;
>> +	struct vhost_scsi_cmd *scsi_cmds;
>> +	struct sbitmap scsi_tags;
>> +	int max_cmds;
>> };
>> 
>> struct vhost_scsi {
>> @@ -324,7 +326,9 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
>> {
>> 	struct vhost_scsi_cmd *tv_cmd = container_of(se_cmd,
>> 				struct vhost_scsi_cmd, tvc_se_cmd);
>> -	struct se_session *se_sess = tv_cmd->tvc_nexus->tvn_se_sess;
>> +	struct vhost_scsi_virtqueue *svq = container_of(tv_cmd->tvc_vq,
>> +				struct vhost_scsi_virtqueue, vq);
>> +	struct vhost_scsi_inflight *inflight = tv_cmd->inflight;
>> 	int i;
>> 
>> 	if (tv_cmd->tvc_sgl_count) {
>> @@ -336,8 +340,8 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
>> 			put_page(sg_page(&tv_cmd->tvc_prot_sgl[i]));
>> 	}
>> 
>> -	vhost_scsi_put_inflight(tv_cmd->inflight);
>> -	target_free_tag(se_sess, se_cmd);
>> +	sbitmap_clear_bit(&svq->scsi_tags, se_cmd->map_tag);
>> +	vhost_scsi_put_inflight(inflight);
>> }
>> 
>> static u32 vhost_scsi_sess_get_index(struct se_session *se_sess)
>> @@ -566,13 +570,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
>> }
>> 
>> static struct vhost_scsi_cmd *
>> -vhost_scsi_get_tag(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
>> +vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
>> 		   unsigned char *cdb, u64 scsi_tag, u16 lun, u8 task_attr,
>> 		   u32 exp_data_len, int data_direction)
>> {
>> +	struct vhost_scsi_virtqueue *svq = container_of(vq,
>> +					struct vhost_scsi_virtqueue, vq);
>> 	struct vhost_scsi_cmd *cmd;
>> 	struct vhost_scsi_nexus *tv_nexus;
>> -	struct se_session *se_sess;
>> 	struct scatterlist *sg, *prot_sg;
>> 	struct page **pages;
>> 	int tag, cpu;
>> @@ -582,15 +587,14 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
>> 		pr_err("Unable to locate active struct vhost_scsi_nexus\n");
>> 		return ERR_PTR(-EIO);
>> 	}
>> -	se_sess = tv_nexus->tvn_se_sess;
>> 
>> -	tag = sbitmap_queue_get(&se_sess->sess_tag_pool, &cpu);
>> +	tag = sbitmap_get(&svq->scsi_tags, 0, false);
>> 	if (tag < 0) {
>> 		pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
>> 		return ERR_PTR(-ENOMEM);
>> 	}
> 
> 
> After this change, cpu is uninitialized.

Thanks, I've fixed this.

We no longer use the cmd's map_cpu field, so I've deleted it along with the cpu variable above.
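To illustrate the scheme the patch moves to, here is a minimal user-space sketch of per-virtqueue tag allocation. This is not the kernel's sbitmap API; the names (struct vq_tags, vq_tag_get/vq_tag_put) and the find-first-zero scan are illustrative assumptions standing in for sbitmap_get()/sbitmap_clear_bit(). The point is that each vq owns a pool sized to its own max_cmds, and that plain sbitmap_get() has no per-CPU hint, which is why the old cpu variable became unused:

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

/* Hypothetical sketch of a per-virtqueue tag pool. Each queue owns a
 * bitmap sized to its command count, so one queue's load can no longer
 * exhaust the tags of another (the problem with the shared per-session
 * sess_tag_pool). */

#define MAX_CMDS 128
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

struct vq_tags {
	unsigned long bits[(MAX_CMDS + BITS_PER_LONG - 1) / BITS_PER_LONG];
	int max_cmds;
};

static void vq_tags_init(struct vq_tags *t, int max_cmds)
{
	memset(t->bits, 0, sizeof(t->bits));
	t->max_cmds = max_cmds;
}

/* Find-first-zero allocation, like sbitmap_get() but without the
 * round-robin/per-CPU hinting of sbitmap_queue_get(). */
static int vq_tag_get(struct vq_tags *t)
{
	int i;

	for (i = 0; i < t->max_cmds; i++) {
		if (!(t->bits[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))) {
			t->bits[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
			return i;
		}
	}
	return -1; /* queue full: caller fails the cmd back to the guest */
}

/* Mirrors sbitmap_clear_bit() in vhost_scsi_release_cmd(). */
static void vq_tag_put(struct vq_tags *t, int tag)
{
	t->bits[tag / BITS_PER_LONG] &= ~(1UL << (tag % BITS_PER_LONG));
}
```

With a pool of N tags, the N+1th concurrent command fails immediately instead of silently borrowing capacity from another queue, and releasing a tag makes it available again to the same vq.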


Thread overview: 23+ messages
2020-09-21 18:23 [PATCH 0/8] vhost scsi: fixes and cleanups Mike Christie
2020-09-21 18:23 ` [PATCH 1/8] vhost vdpa: fix vhost_vdpa_open error handling Mike Christie
2020-09-22  1:58   ` Jason Wang
2020-09-21 18:23 ` [PATCH 2/8] vhost: add helper to check if a vq has been setup Mike Christie
2020-09-22  2:02   ` Jason Wang
2020-09-23 19:12     ` Mike Christie
2020-09-24  7:22       ` Jason Wang
2020-09-22  2:45   ` Bart Van Assche
2020-09-22 15:34     ` Michael Christie
2020-09-21 18:23 ` [PATCH 3/8] vhost scsi: alloc cmds per vq instead of session Mike Christie
2020-09-22  1:32   ` kernel test robot
2020-09-23  7:53   ` Dan Carpenter
2020-09-24  6:22   ` Michael S. Tsirkin
2020-09-24 15:31     ` Michael Christie [this message]
2020-09-21 18:23 ` [PATCH 4/8] vhost scsi: fix cmd completion race Mike Christie
2020-09-22  2:48   ` Bart Van Assche
2020-09-22  2:51     ` Bart Van Assche
2020-09-21 18:23 ` [PATCH 5/8] vhost scsi: add lun parser helper Mike Christie
2020-09-23 10:23   ` Paolo Bonzini
2020-09-21 18:23 ` [PATCH 6/8] vhost scsi: Add support for LUN resets Mike Christie
2020-09-21 18:23 ` [PATCH 7/8] vhost: remove work arg from vhost_work_flush Mike Christie
2020-09-22  2:01   ` Jason Wang
2020-09-21 18:23 ` [PATCH 8/8] vhost scsi: remove extra flushes Mike Christie
