From: Ming Lei <ming.lei@redhat.com>
To: Bart Van Assche <bvanassche@acm.org>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>,
linux-scsi@vger.kernel.org,
"Martin K . Petersen" <martin.petersen@oracle.com>,
"Ewan D . Milne" <emilne@redhat.com>,
Kashyap Desai <kashyap.desai@broadcom.com>,
Hannes Reinecke <hare@suse.de>, Long Li <longli@microsoft.com>,
John Garry <john.garry@huawei.com>,
linux-block@vger.kernel.org
Subject: Re: [PATCH V4] scsi: core: only re-run queue in scsi_end_request() if device queue is busy
Date: Wed, 2 Sep 2020 15:01:55 +0800 [thread overview]
Message-ID: <20200902070155.GD317674@T590> (raw)
In-Reply-To: <93faff01-daf7-4805-edc6-9101495686ce@acm.org>
On Tue, Sep 01, 2020 at 07:40:54PM -0700, Bart Van Assche wrote:
> On 2020-08-17 03:08, Ming Lei wrote:
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index 7c6dd6f75190..a62c29058d26 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -551,8 +551,27 @@ static void scsi_run_queue_async(struct scsi_device *sdev)
> > if (scsi_target(sdev)->single_lun ||
> > !list_empty(&sdev->host->starved_list))
> > kblockd_schedule_work(&sdev->requeue_work);
> > - else
> > - blk_mq_run_hw_queues(sdev->request_queue, true);
> > + else {
>
> Has this patch been verified with checkpatch? Checkpatch should have warned
> about the unbalanced braces.
[linux]$ ./scripts/checkpatch.pl -g HEAD
total: 0 errors, 0 warnings, 71 lines checked
Commit 0cbe51645b54 ("scsi: core: only re-run queue in scsi_end_request() if device queue is busy") has no obvious style problems and is ready for submission.
>
> > + /*
> > + * smp_mb() implied in either rq->end_io or blk_mq_free_request
> > + * is for ordering writing .device_busy in scsi_device_unbusy()
> > + * and reading sdev->restarts.
> > + */
>
> Hmm ... I don't see what orders the atomic_dec(&sdev->device_busy) from
> scsi_device_unbusy() and the atomic_read() below? I don't think that the block
> layer guarantees ordering of these two memory accesses since both accesses
> happen in the request completion path.
__blk_mq_end_request() is called between scsi_device_unbusy() and
scsi_run_queue_async(). When __blk_mq_end_request() is called, the
request really is ended, because the race between timeout and normal
completion is covered by SCMD_STATE_COMPLETE, so:
1) either __blk_mq_free_request() is called, in which case
smp_mb__after_atomic() is implied in sbitmap_queue_clear(), called
from blk_mq_put_tag();
2) or rq->end_io() is called. We don't have many ->end_io()
implementations, and each of them calls either wake_up_process() or
blk_mq_free_request(), so a memory barrier is implied in this case too.
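To make the pairing explicit, here is a rough sketch of the two sides
(illustrative only, not the exact kernel code):

	scsi_mq_get_budget():           scsi_end_request():
	  atomic_inc(&sdev->restarts);    atomic_dec(&sdev->device_busy);
	  smp_mb__after_atomic();         smp_mb(); /* implied, see above */
	  load sdev->device_busy;         load sdev->restarts;

This is the usual store/mb/load pattern: at least one of the two sides
is guaranteed to observe the other side's store, so either the budget
path sees ->device_busy dropping, or the completion path sees the
non-zero ->restarts, and a needed queue re-run can't be lost.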
>
> > + int old = atomic_read(&sdev->restarts);
> > +
> > + if (old) {
> > + /*
> > + * ->restarts has to be kept as non-zero if there is
> > + * new budget contention comes.
>
> There are two verbs in the above sentence ("is" and "comes"). Please remove
> "comes" such that the sentence becomes grammatically correct.
>
> > + *
> > + * No need to run queue when either another re-run
> > + * queue wins in updating ->restarts or one new budget
> > + * contention comes.
> > + */
> > + if (atomic_cmpxchg(&sdev->restarts, old, 0) == old)
> > + blk_mq_run_hw_queues(sdev->request_queue, true);
> > + }
> > + }
>
> Please combine the two if-statements into a single if-statement using "&&"
> to keep the indentation level low.
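The combined form would look something like this (untested sketch):

	int old = atomic_read(&sdev->restarts);

	if (old && atomic_cmpxchg(&sdev->restarts, old, 0) == old)
		blk_mq_run_hw_queues(sdev->request_queue, true);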
>
> > @@ -1611,8 +1630,34 @@ static void scsi_mq_put_budget(struct request_queue *q)
> > static bool scsi_mq_get_budget(struct request_queue *q)
> > {
> > struct scsi_device *sdev = q->queuedata;
> > + int ret = scsi_dev_queue_ready(q, sdev);
> > +
> > + if (ret)
> > + return true;
> > +
> > + atomic_inc(&sdev->restarts);
> >
> > - return scsi_dev_queue_ready(q, sdev);
> > + /*
> > + * Order writing .restarts and reading .device_busy, and make sure
> > + * .restarts is visible to scsi_end_request(). Its pair is implied by
> > + * __blk_mq_end_request() in scsi_end_request() for ordering
> > + * writing .device_busy in scsi_device_unbusy() and reading .restarts.
> > + *
> > + */
> > + smp_mb__after_atomic();
>
> Barriers do not guarantee "is visible to". Barriers enforce ordering of memory
> accesses performed by a certain CPU core. Did you perhaps mean that
> sdev->restarts must be incremented before the code below reads sdev->device_busy?
Right, ->restarts has to be incremented before reading sdev->device_busy.
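Without that ordering, roughly the following interleaving could lose
the re-run (sketch of the race the barriers rule out):

	CPU0 (scsi_mq_get_budget)       CPU1 (scsi_end_request)
	  sees no budget available
	                                  atomic_dec(&sdev->device_busy);
	                                  reads ->restarts == 0, skips re-run
	  atomic_inc(&sdev->restarts);
	  reads stale ->device_busy != 0

and neither side runs the queue again.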
Thanks,
Ming