From: Ming Lei <ming.lei@redhat.com>
To: John Garry <john.garry@huawei.com>
Cc: Kashyap Desai <kashyap.desai@broadcom.com>,
	linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Jens Axboe <axboe@kernel.dk>,
	Douglas Gilbert <dgilbert@interlog.com>,
	Hannes Reinecke <hare@suse.com>
Subject: Re: [bug report] shared tags causes IO hang and performance drop
Date: Tue, 27 Apr 2021 17:52:58 +0800	[thread overview]
Message-ID: <YIfe+mpcV17XsHuL@T590> (raw)
In-Reply-To: <cb81d990-e5a6-49b1-5d96-8079a80c73f5@huawei.com>

On Tue, Apr 27, 2021 at 10:37:39AM +0100, John Garry wrote:
> On 27/04/2021 10:11, Ming Lei wrote:
> > On Tue, Apr 27, 2021 at 08:52:53AM +0100, John Garry wrote:
> > > On 27/04/2021 00:59, Ming Lei wrote:
> > > > > Anyway, I'll look at adding code for per-request-queue sched tags to see
> > > > > if it helps. But I plan to continue to use a per-hctx sched request
> > > > > pool.
> > > > Why not switch to per hctx sched request pool?
> > > I don't understand. The current code uses a per-hctx sched request pool, and
> > > I said that I don't plan to change that.
> > I forget why you didn't do that, because for hostwide tags, requests
> > are always 1:1 with either sched tags (real io sched) or driver tags (none).
> > 
> > Maybe you want to keep requests local to the hctx, but I have never seen
> > performance data supporting that point; the sbitmap queue allocator is
> > already intelligent enough to allocate a tag freed from the native cpu.
> > 
> > Then you just waste lots of memory; I remember that the scsi request
> > payload is a bit big.
> 
> It's true that we waste a lot of memory on regular static requests when
> using hostwide tags today.
> 
> One problem in trying to use a single set of "hostwide" static requests is
> that we call blk_mq_init_request(..., hctx_idx, ...) ->
> set->ops->init_request(.., hctx_idx, ...) for each static rq, and this would
> not work for a single set of "hostwide" requests.
> 
> And I see a similar problem for "request queue-wide" sched static
> requests.
> 
> Maybe we can improve this in future.

OK, fair enough.
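
For anyone following the thread, the per-hctx init path referred to above is
roughly the pattern below. This is a condensed, illustrative sketch modeled on
blk_mq_alloc_rqs()/blk_mq_init_request() in block/blk-mq.c, not the actual
kernel code, and the helper name init_static_rqs_for_hctx is made up for the
example:

/*
 * Condensed sketch of why static requests are tied to an hctx today: every
 * static rq in a tag set is initialised with the index of the hw queue it
 * belongs to, so the driver's ->init_request() can set up per-hctx
 * resources.  A single "hostwide" request pool would have no natural
 * hctx_idx to pass here.  (Illustrative only; blk_mq_tags fields live in
 * the block layer's private headers.)
 */
static int init_static_rqs_for_hctx(struct blk_mq_tag_set *set,
				    struct blk_mq_tags *tags,
				    unsigned int hctx_idx, int node)
{
	unsigned int i;

	for (i = 0; i < tags->nr_tags; i++) {
		struct request *rq = tags->static_rqs[i];

		if (set->ops->init_request &&
		    set->ops->init_request(set, rq, hctx_idx, node))
			return -ENOMEM;
	}
	return 0;
}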

> 
> BTW, for the performance issue which Yanhui witnessed with megaraid sas, do
> you think it may be because of the IO sched tags issue, i.e. the total sched
> tag depth growing versus the driver tag depth?

I think it is highly possible. Will you work on a patch to convert to
per-request-queue sched tags?
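
To make that suspicion concrete, here is a toy calculation; the hw queue
count and per-hctx sched depth below are hypothetical, and 228 is the
megaraid host tag depth mentioned further down:

/*
 * Illustrative numbers only: with hostwide driver tags but per-hctx sched
 * tags, the scheduler may admit nr_hw_queues * sched_depth requests, all
 * contending for a single hostwide pool of driver tags.
 */
#include <stdio.h>

int main(void)
{
	unsigned int nr_hw_queues = 16;          /* hypothetical */
	unsigned int sched_depth_per_hctx = 256; /* hypothetical */
	unsigned int can_queue = 228;            /* hostwide driver tags */

	printf("total sched tags: %u\n", nr_hw_queues * sched_depth_per_hctx);
	printf("driver tags:      %u\n", can_queue);
	/* 4096 schedulable requests contend for only 228 driver tags. */
	return 0;
}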

> Are there lots of LUNs? I can imagine that megaraid
> sas has a much larger can_queue than scsi_debug :)

No, there are just two LUNs. The 1st LUN is a commodity SSD (queue depth
32) and the performance issue is reported on this LUN; the other is an HDD
(queue depth 256) which is the root disk, but the megaraid host tag depth is
228, another weird setting. The issue can still be reproduced after we set the
2nd LUN's depth to 64 to avoid driver tag contention.
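
As an aside, capping a LUN's queue depth from inside the kernel goes through
scsi_change_queue_depth(); in the test above the depth was more likely changed
via the sysfs queue_depth attribute, so the snippet below is only an
illustration of the interface involved and the wrapper name is made up:

#include <scsi/scsi_device.h>

/*
 * Hypothetical helper: clamp a SCSI device's queue depth, as one would do
 * to keep a busy LUN from monopolising the hostwide driver tags.
 */
static void cap_lun_queue_depth(struct scsi_device *sdev, int max_depth)
{
	if (sdev->queue_depth > max_depth)
		scsi_change_queue_depth(sdev, max_depth);
}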



Thanks,
Ming


Thread overview: 38+ messages
2021-04-14  7:50 [bug report] shared tags causes IO hang and performance drop Ming Lei
2021-04-14 10:10 ` John Garry
2021-04-14 10:38   ` Ming Lei
2021-04-14 10:42   ` Kashyap Desai
2021-04-14 11:12     ` Ming Lei
2021-04-14 12:06       ` John Garry
2021-04-15  3:46         ` Ming Lei
2021-04-15 10:41           ` John Garry
2021-04-15 12:18             ` Ming Lei
2021-04-15 15:41               ` John Garry
2021-04-16  0:46                 ` Ming Lei
2021-04-16  8:29                   ` John Garry
2021-04-16  8:39                     ` Ming Lei
2021-04-16 14:59                       ` John Garry
2021-04-20  3:06                         ` Douglas Gilbert
2021-04-20  3:22                           ` Bart Van Assche
2021-04-20  4:54                             ` Douglas Gilbert
2021-04-20  6:52                               ` Ming Lei
2021-04-20 20:22                                 ` Douglas Gilbert
2021-04-21  1:40                                   ` Ming Lei
2021-04-23  8:43           ` John Garry
2021-04-26 10:53             ` John Garry
2021-04-26 14:48               ` Ming Lei
2021-04-26 15:52                 ` John Garry
2021-04-26 16:03                   ` Ming Lei
2021-04-26 17:02                     ` John Garry
2021-04-26 23:59                       ` Ming Lei
2021-04-27  7:52                         ` John Garry
2021-04-27  9:11                           ` Ming Lei
2021-04-27  9:37                             ` John Garry
2021-04-27  9:52                               ` Ming Lei [this message]
2021-04-27 10:15                                 ` John Garry
2021-07-07 17:06                                 ` John Garry
2021-04-14 13:59       ` Kashyap Desai
2021-04-14 17:03         ` Douglas Gilbert
2021-04-14 18:19           ` John Garry
2021-04-14 19:39             ` Douglas Gilbert
2021-04-15  0:58         ` Ming Lei
