From: Ming Lei <ming.lei@redhat.com>
To: John Garry <john.garry@huawei.com>
Cc: Kashyap Desai <kashyap.desai@broadcom.com>,
	linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Jens Axboe <axboe@kernel.dk>,
	Douglas Gilbert <dgilbert@interlog.com>,
	Hannes Reinecke <hare@suse.com>
Subject: Re: [bug report] shared tags causes IO hang and performance drop
Date: Tue, 27 Apr 2021 00:03:43 +0800
Message-ID: <YIbkX2G0+dp3PV+u@T590>
In-Reply-To: <55743a51-4d6f-f481-cebf-e2af9c657911@huawei.com>

On Mon, Apr 26, 2021 at 04:52:28PM +0100, John Garry wrote:
> On 26/04/2021 15:48, Ming Lei wrote:
> > >       --0.56%--sbitmap_get
> > > 
> > > I don't see this for hostwide tags - this may be because we have multiple
> > > hctx, and the IO sched tags are per hctx, so there is less chance of
> > > exhaustion. But this is not from hostwide tags specifically; it applies to
> > > multiple HW queues in general. As I understood, sched tags were meant to be
> > > per request queue, right? Am I reading this correctly?
> > sched tags are still per-hctx.
> > 
> > I just noticed that you didn't change sched tags into per-request-queue
> > shared tags. So for hostwide tags, each hctx still has its own standalone
> > sched tags and request pool; that is one big difference from non-hostwide
> > tags.
> 
> For both hostwide and non-hostwide tags, we have standalone sched tags and a
> request pool per hctx when q->nr_hw_queues > 1.

The driver tags, however, are shared for hostwide tags.
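
To make that layout concrete, here is a minimal sketch of the structure being
described (hypothetical, simplified types only, not the real blk-mq
structures): with hostwide tags every hctx points at one shared driver tag
set, while each hctx still owns its own sched tags and request pool.

/*
 * Hypothetical, simplified model; NOT the real blk-mq structures.
 * It only shows what is shared and what stays per-hctx when hostwide
 * (shared) tags are used.
 */
struct tag_set {
	unsigned int depth;		/* number of tags in this set */
	/* free-tag bitmap would live here */
};

struct hw_queue_ctx {
	struct tag_set *driver_tags;	/* hostwide: every hctx points at
					 * the same host-shared set */
	struct tag_set *sched_tags;	/* still one private set per hctx,
					 * each backed by its own request
					 * pool */
};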

> 
> > That is why you observe that scheduler tag exhaustion
> > is easy to trigger in the case of non-hostwide tags.
> > 
> > I'd suggest adding one per-request-queue sched tags set and making all
> > hctxs share it, just like what you did for the driver tags.
> > 
> 
> That sounds reasonable.
> 
> But I don't see how this is related to hostwide tags specifically; it seems
> to come just from having q->nr_hw_queues > 1, which NVMe PCI and some other
> SCSI MQ HBAs have (without using hostwide tags).

Before hostwide tags, the total scheduler queue depth is 256. With hostwide
tags, the total scheduler queue depth becomes 256 * nr_hw_queues, but the
driver tag queue depth is _not_ changed.

More requests come in and are attempted to be dispatched to the LLD, but they
can't succeed because of the limited driver tag depth, so CPU utilization
increases.
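
As a rough, self-contained illustration of that imbalance (the 256 per-hctx
scheduler depth is the figure above; nr_hw_queues and the driver tag depth
are made-up example values):

#include <stdio.h>

int main(void)
{
	/* Example values only: 256 is the per-hctx scheduler depth mentioned
	 * above; nr_hw_queues and driver_depth are assumed for illustration. */
	unsigned int sched_depth_per_hctx = 256;
	unsigned int nr_hw_queues = 16;
	unsigned int driver_depth = 1024;	/* shared by all hctxs */

	unsigned int total_sched_depth = sched_depth_per_hctx * nr_hw_queues;

	printf("requests the scheduler can hold: %u\n", total_sched_depth); /* 4096 */
	printf("requests that can own a driver tag: %u\n", driver_depth);   /* 1024 */

	/* The surplus keeps bouncing through dispatch and requeue without
	 * ever getting a driver tag, which is where the extra CPU time goes. */
	return 0;
}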

Thanks,
Ming



Thread overview: 38+ messages
2021-04-14  7:50 [bug report] shared tags causes IO hang and performance drop Ming Lei
2021-04-14 10:10 ` John Garry
2021-04-14 10:38   ` Ming Lei
2021-04-14 10:42   ` Kashyap Desai
2021-04-14 11:12     ` Ming Lei
2021-04-14 12:06       ` John Garry
2021-04-15  3:46         ` Ming Lei
2021-04-15 10:41           ` John Garry
2021-04-15 12:18             ` Ming Lei
2021-04-15 15:41               ` John Garry
2021-04-16  0:46                 ` Ming Lei
2021-04-16  8:29                   ` John Garry
2021-04-16  8:39                     ` Ming Lei
2021-04-16 14:59                       ` John Garry
2021-04-20  3:06                         ` Douglas Gilbert
2021-04-20  3:22                           ` Bart Van Assche
2021-04-20  4:54                             ` Douglas Gilbert
2021-04-20  6:52                               ` Ming Lei
2021-04-20 20:22                                 ` Douglas Gilbert
2021-04-21  1:40                                   ` Ming Lei
2021-04-23  8:43           ` John Garry
2021-04-26 10:53             ` John Garry
2021-04-26 14:48               ` Ming Lei
2021-04-26 15:52                 ` John Garry
2021-04-26 16:03                   ` Ming Lei [this message]
2021-04-26 17:02                     ` John Garry
2021-04-26 23:59                       ` Ming Lei
2021-04-27  7:52                         ` John Garry
2021-04-27  9:11                           ` Ming Lei
2021-04-27  9:37                             ` John Garry
2021-04-27  9:52                               ` Ming Lei
2021-04-27 10:15                                 ` John Garry
2021-07-07 17:06                                 ` John Garry
2021-04-14 13:59       ` Kashyap Desai
2021-04-14 17:03         ` Douglas Gilbert
2021-04-14 18:19           ` John Garry
2021-04-14 19:39             ` Douglas Gilbert
2021-04-15  0:58         ` Ming Lei
