From: Ming Lei <ming.lei@redhat.com>
To: John Garry <john.garry@huawei.com>
Cc: Kashyap Desai <kashyap.desai@broadcom.com>,
	linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Jens Axboe <axboe@kernel.dk>,
	Douglas Gilbert <dgilbert@interlog.com>,
	Hannes Reinecke <hare@suse.com>
Subject: Re: [bug report] shared tags causes IO hang and performance drop
Date: Tue, 27 Apr 2021 07:59:07 +0800	[thread overview]
Message-ID: <YIdTyyVE5azlYwtO@T590> (raw)
In-Reply-To: <9ad15067-ba7b-a335-ae71-8c4328856b91@huawei.com>

On Mon, Apr 26, 2021 at 06:02:31PM +0100, John Garry wrote:
> On 26/04/2021 17:03, Ming Lei wrote:
> > > For both hostwide and non-hostwide tags, we have standalone sched tags and
> > > request pool per hctx when q->nr_hw_queues > 1.
> > Driver tags are shared for hostwide tags.
> > 
> > > > That is why you observe that scheduler tag exhaustion
> > > > is easy to trigger in case of non-hostwide tags.
> > > > 
> > > > I'd suggest adding one per-request-queue sched tags structure, and making
> > > > all hctxs share it, just like what you did for the driver tags.
> > > > 
> > > That sounds reasonable.
> > > 
> > > But I don't see how this is specific to hostwide tags; it seems to come
> > > simply from having q->nr_hw_queues > 1, which NVMe PCI and some other
> > > SCSI MQ HBAs have (without using hostwide tags).
> > Before hostwide tags, the whole scheduler queue depth was 256. After
> > hostwide tags, the whole scheduler queue depth becomes 256 *
> > nr_hw_queues, but the driver tag queue depth is _not_ changed.
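
To make the mismatch concrete: a throwaway sketch with made-up numbers (16 hw
queues, host-wide driver tag depth 128), just to illustrate the arithmetic
above:

#include <stdio.h>

int main(void)
{
        unsigned int nr_hw_queues = 16;   /* made-up example hctx count */
        unsigned int sched_depth  = 256;  /* scheduler tag depth per hctx */
        unsigned int driver_depth = 128;  /* shared (host-wide) driver tag depth */

        /* requests the schedulers can hold vs. tags the LLD can take at once */
        printf("schedulable: %u, dispatchable: %u\n",
               sched_depth * nr_hw_queues, driver_depth);
        return 0;
}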
> 
> Fine.
> 
> > 
> > More requests come in and we try to dispatch them to the LLD, but they
> > can't all succeed because of the limited driver tag depth, so CPU
> > utilization can increase.
> 
> Right, maybe this is a problem.
> 
> I quickly added some debug, and see that
> __blk_mq_get_driver_tag()->__sbitmap_queue_get() fails ~7% of the time for
> hostwide tags and ~3% for non-hostwide tags.
> 
> Having it fail at all for non-hostwide tags seems a bit dubious... here's
> the code for deciding the rq sched tag depth:
> 
> q->nr_requests = 2 * min(q->tag_set->queue_depth [128], BLKDEV_MAX_RQ [128])
> 
> So we get 256 for our test scenario, which is appreciably bigger than
> q->tag_set->queue_depth, so the failures make sense.
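
Spelling that out with a throwaway snippet (treating both bracketed values as
128, as in this setup):

#include <stdio.h>

#define BLKDEV_MAX_RQ 128  /* default cap used in the formula above */

static unsigned int min_u(unsigned int a, unsigned int b)
{
        return a < b ? a : b;
}

int main(void)
{
        unsigned int queue_depth = 128;  /* q->tag_set->queue_depth here */
        unsigned int nr_requests = 2 * min_u(queue_depth, BLKDEV_MAX_RQ);

        /* 256 sched tags per hctx vs. 128 driver tags, so some driver-tag
         * allocations are bound to fail under load */
        printf("nr_requests = %u, driver tag depth = %u\n",
               nr_requests, queue_depth);
        return 0;
}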
> 
> Anyway, I'll look at adding code for per-request-queue sched tags to see if
> it helps. But I would plan to continue to use a per-hctx sched request pool.

Why not switch to a per-request-queue sched request pool as well?
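
Roughly what I mean, as a conceptual sketch only (the type and field names
below are made up for illustration, not the real blk-mq structures):

/* Conceptual sketch: hypothetical names, not actual blk-mq code. */
struct sched_tag_pool {
        unsigned int depth;             /* e.g. q->nr_requests */
        /* bitmap of free scheduler tags plus the request pool itself */
};

struct hw_queue_sketch {
        struct sched_tag_pool *sched_tags;  /* today: one private pool per hctx */
};

struct queue_sketch {
        struct sched_tag_pool shared_sched; /* one pool for the whole queue */
        struct hw_queue_sketch *hctxs;
        unsigned int nr_hw_queues;
};

/* Point every hctx at the same per-request-queue pool, mirroring how the
 * shared (host-wide) driver tags already work. */
static void share_sched_tags(struct queue_sketch *q)
{
        unsigned int i;

        for (i = 0; i < q->nr_hw_queues; i++)
                q->hctxs[i].sched_tags = &q->shared_sched;
}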

Thanks,
Ming



Thread overview: 38+ messages
2021-04-14  7:50 [bug report] shared tags causes IO hang and performance drop Ming Lei
2021-04-14 10:10 ` John Garry
2021-04-14 10:38   ` Ming Lei
2021-04-14 10:42   ` Kashyap Desai
2021-04-14 11:12     ` Ming Lei
2021-04-14 12:06       ` John Garry
2021-04-15  3:46         ` Ming Lei
2021-04-15 10:41           ` John Garry
2021-04-15 12:18             ` Ming Lei
2021-04-15 15:41               ` John Garry
2021-04-16  0:46                 ` Ming Lei
2021-04-16  8:29                   ` John Garry
2021-04-16  8:39                     ` Ming Lei
2021-04-16 14:59                       ` John Garry
2021-04-20  3:06                         ` Douglas Gilbert
2021-04-20  3:22                           ` Bart Van Assche
2021-04-20  4:54                             ` Douglas Gilbert
2021-04-20  6:52                               ` Ming Lei
2021-04-20 20:22                                 ` Douglas Gilbert
2021-04-21  1:40                                   ` Ming Lei
2021-04-23  8:43           ` John Garry
2021-04-26 10:53             ` John Garry
2021-04-26 14:48               ` Ming Lei
2021-04-26 15:52                 ` John Garry
2021-04-26 16:03                   ` Ming Lei
2021-04-26 17:02                     ` John Garry
2021-04-26 23:59                       ` Ming Lei [this message]
2021-04-27  7:52                         ` John Garry
2021-04-27  9:11                           ` Ming Lei
2021-04-27  9:37                             ` John Garry
2021-04-27  9:52                               ` Ming Lei
2021-04-27 10:15                                 ` John Garry
2021-07-07 17:06                                 ` John Garry
2021-04-14 13:59       ` Kashyap Desai
2021-04-14 17:03         ` Douglas Gilbert
2021-04-14 18:19           ` John Garry
2021-04-14 19:39             ` Douglas Gilbert
2021-04-15  0:58         ` Ming Lei
