linux-block.vger.kernel.org archive mirror
From: Douglas Gilbert <dgilbert@interlog.com>
To: John Garry <john.garry@huawei.com>, Ming Lei <ming.lei@redhat.com>
Cc: Kashyap Desai <kashyap.desai@broadcom.com>,
	linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: Re: [bug report] shared tags causes IO hang and performance drop
Date: Mon, 19 Apr 2021 23:06:34 -0400	[thread overview]
Message-ID: <ccdaee0e-3824-927c-8647-e8f44c1557dc@interlog.com> (raw)
In-Reply-To: <89ebc37c-21d6-c57e-4267-cac49a3e5953@huawei.com>

On 2021-04-16 10:59 a.m., John Garry wrote:
> On 16/04/2021 09:39, Ming Lei wrote:
>> On Fri, Apr 16, 2021 at 09:29:37AM +0100, John Garry wrote:
>>> On 16/04/2021 01:46, Ming Lei wrote:
>>>>> I can't seem to recreate your same issue. Are you mainline defconfig, or a
>>>>> special disto config?
>>>> The config is rhel8 config.
>>>>
>>> Can you share that? Has anyone tested against mainline x86 config?
>> Sure, see the attachment.
> 
> Thanks. I assume that this is not seen on mainline x86 defconfig.
> 
> Unfortunately it's anything but easy for me to install an x86 kernel ATM.
> 
> And I am still seeing this on hisi_sas v2 hw with 5.12-rc7:
> 
> [  214.448368] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> [  214.454468] rcu:Tasks blocked on level-1 rcu_node (CPUs 0-15):
> [  214.460474]  (detected by 40, t=5255 jiffies, g=2229, q=1110)
> [  214.466208] rcu: All QSes seen, last rcu_preempt kthread activity 1 
> (4294945760-4294945759), jiffies_till_next_fqs=1, root ->qsmask 0x1
> [  214.478466] BUG: scheduling while atomic: irq/151-hisi_sa/503/0x00000004
> [  214.485162] Modules linked in:
> [  214.488208] CPU: 40 PID: 503 Comm: irq/151-hisi_sa Not tainted 5.11.0 #75
> [  214.494985] Hardware name: Huawei Taishan 2280 /D05, BIOS Hisilicon D05 IT21 
> Nemo 2.0 RC0 04/18/2018
> [  214.504105] Call trace:
> [  214.506540]  dump_backtrace+0x0/0x1b0
> [  214.510208]  show_stack+0x18/0x68
> [  214.513513]  dump_stack+0xd8/0x134
> [  214.516907]  __schedule_bug+0x60/0x78
> [  214.520560]  __schedule+0x620/0x6d8
> [  214.524039]  schedule+0x70/0x108
> [  214.527256]  irq_thread+0xdc/0x230
> [  214.530648]  kthread+0x154/0x158
> [  214.533866]  ret_from_fork+0x10/0x30
> john@ubuntu:~$
> 
> For rw=randread and mq-deadline only, it seems. v5.11 has the same. Not sure if 
> this is a driver or other issue.
> 
> Today I don't have access to other boards with enough disks to get a decent 
> throughput to test against :(

I have always suspected that under extreme pressure the block layer (or SCSI
mid-level) does strange things, such as an IO hang; attempts to prove that
usually lead back to my own code :-). But I have one recent example where
upwards of 10 commands had been submitted (blk_execute_rq_nowait())
and the following one stalled (all on the same thread). Seconds later
those 10 commands reported DID_TIME_OUT, the stalled thread awoke, and
my dd variant ran to its conclusion (reporting 10 errors). Subsequent
copies showed no ill effects.

My weapons of choice are sg_dd, or more precisely sgh_dd and sg_mrq_dd. Those
last two monitor for stalls during the copy. Each submitted READ and WRITE
command gets its pack_id from an incrementing atomic, and a management
thread in those copies checks every 300 milliseconds that that atomic
value is greater than it was at the previous check. If it is not, the state
of the sg driver is dumped. In the case above, the stalled request was in
busy state with a timeout of 1 nanosecond, which indicated that
blk_execute_rq_nowait() had not yet been called. So the chief suspects,
IMO, are blk_get_request() and the bio setup calls that follow it.
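The watchdog scheme described above can be sketched in user space. This is a
hypothetical toy illustrating the idea (an incrementing pack_id counter plus a
300 ms monitor thread), not the actual sgh_dd/sg_mrq_dd source; the class and
method names here are invented:

```python
import threading
import time

class PackIdWatchdog:
    """Detect submission stalls: pack_id must advance between checks."""

    def __init__(self, interval=0.3):
        self.pack_id = 0                   # incrementing "atomic" counter
        self.lock = threading.Lock()
        self.interval = interval           # 300 ms, as in the copies above
        self.stalls = []                   # pack_ids at which we saw no progress
        self._stop = threading.Event()

    def next_pack_id(self):
        """Assign the next pack_id to a submitted command."""
        with self.lock:
            self.pack_id += 1
            return self.pack_id

    def _monitor(self):
        prev = self.pack_id
        while not self._stop.wait(self.interval):
            cur = self.pack_id
            if cur == prev:
                # Real tools would dump driver state here.
                self.stalls.append(cur)
            prev = cur

    def start(self):
        t = threading.Thread(target=self._monitor, daemon=True)
        t.start()
        return t

    def stop(self):
        self._stop.set()

# Demo: submit a few "commands", then hang for a second.
wd = PackIdWatchdog()
wd.start()
for _ in range(5):
    wd.next_pack_id()
    time.sleep(0.05)
time.sleep(1.0)              # simulated IO hang: no further submissions
wd.stop()
print("stalled at pack_id", wd.stalls)
```

During the simulated hang the monitor fires several times with the counter
stuck at its last value, which is exactly the signature used to decide when to
dump the sg driver's state.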

So it certainly looked like an IO hang, and not a locking, resource, or
corruption issue IMO. That was with a branch off MKP's 5.13/scsi-staging
branch taken a few weeks back, so basically lk 5.12.0-rc1.

Doug Gilbert



Thread overview: 38+ messages
2021-04-14  7:50 [bug report] shared tags causes IO hang and performance drop Ming Lei
2021-04-14 10:10 ` John Garry
2021-04-14 10:38   ` Ming Lei
2021-04-14 10:42   ` Kashyap Desai
2021-04-14 11:12     ` Ming Lei
2021-04-14 12:06       ` John Garry
2021-04-15  3:46         ` Ming Lei
2021-04-15 10:41           ` John Garry
2021-04-15 12:18             ` Ming Lei
2021-04-15 15:41               ` John Garry
2021-04-16  0:46                 ` Ming Lei
2021-04-16  8:29                   ` John Garry
2021-04-16  8:39                     ` Ming Lei
2021-04-16 14:59                       ` John Garry
2021-04-20  3:06                         ` Douglas Gilbert [this message]
2021-04-20  3:22                           ` Bart Van Assche
2021-04-20  4:54                             ` Douglas Gilbert
2021-04-20  6:52                               ` Ming Lei
2021-04-20 20:22                                 ` Douglas Gilbert
2021-04-21  1:40                                   ` Ming Lei
2021-04-23  8:43           ` John Garry
2021-04-26 10:53             ` John Garry
2021-04-26 14:48               ` Ming Lei
2021-04-26 15:52                 ` John Garry
2021-04-26 16:03                   ` Ming Lei
2021-04-26 17:02                     ` John Garry
2021-04-26 23:59                       ` Ming Lei
2021-04-27  7:52                         ` John Garry
2021-04-27  9:11                           ` Ming Lei
2021-04-27  9:37                             ` John Garry
2021-04-27  9:52                               ` Ming Lei
2021-04-27 10:15                                 ` John Garry
2021-07-07 17:06                                 ` John Garry
2021-04-14 13:59       ` Kashyap Desai
2021-04-14 17:03         ` Douglas Gilbert
2021-04-14 18:19           ` John Garry
2021-04-14 19:39             ` Douglas Gilbert
2021-04-15  0:58         ` Ming Lei
