From: Kashyap Desai <kashyap.desai@broadcom.com>
To: John Garry <john.garry@huawei.com>, Ming Lei <ming.lei@redhat.com>
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Jens Axboe <axboe@kernel.dk>,
	Douglas Gilbert <dgilbert@interlog.com>
Subject: RE: [bug report] shared tags causes IO hang and performance drop
Date: Wed, 14 Apr 2021 16:12:22 +0530
Message-ID: <cb326d404c6e0785d03a7dfadc42832c@mail.gmail.com>
In-Reply-To: <9a6145a5-e6ac-3d33-b52a-0823bfc3b864@huawei.com>


> Hi Ming,
>
> >
> > It is reported inside RH that CPU utilization is increased ~20% when
> > running simple FIO test inside VM which disk is built on image stored
> > on XFS/megaraid_sas.
> >
> > When I try to investigate by reproducing the issue via scsi_debug, I
> > found IO hang when running randread IO(8k, direct IO, libaio) on
> > scsi_debug disk created by the following command:
> >
> > 	modprobe scsi_debug host_max_queue=128 submit_queues=$NR_CPUS virtual_gb=256
> >
>
> So I can recreate this hang when using the mq-deadline IO sched for
> scsi_debug, in that fio does not exit. I'm using v5.12-rc7.

I can also recreate this issue using mq-deadline. Using <none>, there is no
IO hang. Also, if I run a script that switches the scheduler periodically
(none <-> mq-deadline), the write to the scheduler sysfs entry hangs; a rough
reproduction sketch follows.
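
A minimal reproduction sketch, assuming the scsi_debug disk shows up as
/dev/sdX (the device name, fio queue depth and job count below are my
assumptions; the original report used 8k random reads with direct IO and
libaio):

	modprobe scsi_debug host_max_queue=128 submit_queues=$(nproc) virtual_gb=256
	echo mq-deadline > /sys/block/sdX/queue/scheduler
	fio --name=randread --filename=/dev/sdX --rw=randread --bs=8k \
	    --direct=1 --ioengine=libaio --iodepth=128 --numjobs=$(nproc) \
	    --runtime=60 --time_based --group_reporting &

	# In parallel, toggle the scheduler; the echo below eventually hangs:
	while true; do
		echo none        > /sys/block/sdX/queue/scheduler
		sleep 1
		echo mq-deadline > /sys/block/sdX/queue/scheduler
		sleep 1
	done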

Here is the call trace of the hung scheduler switch:
Call Trace:
[ 1229.879862]  __schedule+0x29d/0x7a0
[ 1229.879871]  schedule+0x3c/0xa0
[ 1229.879875]  blk_mq_freeze_queue_wait+0x62/0x90
[ 1229.879880]  ? finish_wait+0x80/0x80
[ 1229.879884]  elevator_switch+0x12/0x40
[ 1229.879888]  elv_iosched_store+0x79/0x120
[ 1229.879892]  ? kernfs_fop_write_iter+0xc7/0x1b0
[ 1229.879897]  queue_attr_store+0x42/0x70
[ 1229.879901]  kernfs_fop_write_iter+0x11f/0x1b0
[ 1229.879905]  new_sync_write+0x11f/0x1b0
[ 1229.879912]  vfs_write+0x184/0x250
[ 1229.879915]  ksys_write+0x59/0xd0
[ 1229.879917]  do_syscall_64+0x33/0x40
[ 1229.879922]  entry_SYSCALL_64_after_hwframe+0x44/0xae
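
For reference, a blocked-task dump like the one above can be captured with
the SysRq 'w' trigger (assuming sysrq is enabled on the system):

	echo w > /proc/sysrq-trigger
	dmesg | tail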


I tried both 5.12.0-rc1 and 5.11.0-rc2+, and the behavior is the same.
Let me also check megaraid_sas and see whether this is generic or a
special case of scsi_debug.

>
> Do you have any idea of what changed to cause this, as we would have
> tested this before? Or maybe we only tested the none IO sched on scsi_debug,
> and normally with a 4k block size and only rw=read (for me, anyway).
>
> Note that host_max_queue=128 will cap the submit queue depth at 128, while
> it would be 192 by default.
>
> Will check more...including CPU utilization.
>
> Thanks,
> John
>
> > It looks like it is caused by SCHED_RESTART: currently RESTART is only done
> > on the current hctx, and we may need to restart all hctxs for shared tags.
> > The issue can be fixed by the appended patch. However, IOPS drops by more
> > than 10% with the patch.
> >
> > So, any ideas about this issue and the original performance drop?
> >
> > diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> > index e1e997af89a0..45188f7aa789 100644
> > --- a/block/blk-mq-sched.c
> > +++ b/block/blk-mq-sched.c
> > @@ -59,10 +59,18 @@ EXPORT_SYMBOL_GPL(blk_mq_sched_mark_restart_hctx);
> >
> >   void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
> >   {
> > +	bool shared_tag = blk_mq_is_sbitmap_shared(hctx->flags);
> > +
> > +	if (shared_tag)
> > +		blk_mq_run_hw_queues(hctx->queue, true);
> > +
> >   	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
> >   		return;
> >   	clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
> >
> > +	if (shared_tag)
> > +		return;
> > +
> >   	/*
> >   	 * Order clearing SCHED_RESTART and list_empty_careful(&hctx->dispatch)
> >   	 * in blk_mq_run_hw_queue(). Its pair is the barrier in
> >
> > Thanks,
> > Ming
> >
> > .
> >
