From: John Garry <john.garry@huawei.com>
To: Kashyap Desai <kashyap.desai@broadcom.com>,
	Ming Lei <ming.lei@redhat.com>, <linux-scsi@vger.kernel.org>,
	<linux-block@vger.kernel.org>, Hannes Reinecke <hare@suse.com>
Cc: chenxiang <chenxiang66@hisilicon.com>, <luojiaxing@huawei.com>
Subject: Re: [bug report] scsi host hang when running fio
Date: Mon, 19 Apr 2021 16:11:23 +0100	[thread overview]
Message-ID: <2bd9adf9-7766-687a-2510-eb6a058f00d8@huawei.com> (raw)
In-Reply-To: <f934ca65fa55345c360c944dd0fc2239@mail.gmail.com>

Hi Kashyap,

> John - I have not seen such an issue on the megaraid_sas driver.

I could try to test megaraid SAS also, but the system with that card has 
only a single SATA disk, so it would be pretty pointless really.

> Is this something to do with CPU lockup?

Seems to be.

JFYI, enabling the configs RCU_EXPERT, DEBUG_ATOMIC_SLEEP, and 
DEBUG_SPINLOCK gives:

job1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
job1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.1
Starting 6 processes
[  196.342724] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[  196.348816] rcu:     Tasks blocked on level-1 rcu_node (CPUs 32-47):
[  196.354913] rcu: All QSes seen, last rcu_preempt kthread activity 1 (4294941135-4294941134), jiffies_till_next_fqs=1, root ->qsmask 0x4
[  196.367089] BUG: sleeping function called from invalid context at include/linux/uaccess.h:174
[  196.375605] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1893, name: fio
[  196.383502] BUG: scheduling while atomic: fio/1893/0x00000004
[  196.389312] BUG: spinlock recursion on CPU#11, fio/1893
[  196.394527]  lock: rcu_state+0x280/0x2d00, .magic: dead4ead, .owner: fio/1893, .owner_cpu: 11
[  196.403046] CPU: 11 PID: 1893 Comm: fio Tainted: G W 5.12.0-rc7-00001-g3ae18ff9e445 #219
[  196.412426] Hardware name: Huawei Taishan 2280 /D05, BIOS Hisilicon D05 IT21 Nemo 2.0 RC0 04/18/2018
[  196.421544] Call trace:
[  196.423977]  dump_backtrace+0x0/0x1b0
[  196.427629]  show_stack+0x18/0x68
[  196.430932]  dump_stack+0xd8/0x134
[  196.434322]  spin_dump+0x84/0x94
[  196.437539]  do_raw_spin_lock+0x108/0x120
[  196.441539]  _raw_spin_lock+0x20/0x30
[  196.445191]  rcu_note_context_switch+0xbc/0x348
[  196.449710]  __schedule+0xc8/0x6e8
[  196.453100]  preempt_schedule_notrace+0x50/0x70
[  196.457618]  __arm64_sys_io_submit+0x188/0x240
[  196.462051]  el0_svc_common.constprop.2+0x8c/0x128
[  196.466829]  do_el0_svc+0x24/0x90
[  196.470133]  el0_svc+0x24/0x38
[  196.473175]  el0_sync_handler+0x90/0xb8
[  196.476999]  el0_sync+0x154/0x180
^Cbs: 6 (f=6): [r(6)][4.2%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01h:11m:54s]
fio: terminating on signal 2
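
For reference, those debug options can be flipped on with the kernel 
tree's scripts/config helper before rebuilding (run from the top of the 
source tree; olddefconfig then resolves any remaining dependencies):

  ./scripts/config -e DEBUG_KERNEL -e RCU_EXPERT -e DEBUG_ATOMIC_SLEEP -e DEBUG_SPINLOCK
  make olddefconfig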

> Can you try your test with "rq_affinity=2"?

I cannot see the issue with this setting.
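
In case it helps anyone trying to recreate, rq_affinity is a per-device 
queue attribute in sysfs; sdX below is just a placeholder for the disk 
under test:

  # 2 = force completions onto the CPU which submitted the request
  echo 2 > /sys/block/sdX/queue/rq_affinity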

> The megaraid_sas driver detects CPU
> lockup (a flood of completions on a single CPU) and it uses the irq_poll
> interface to avoid such a loop.

Can you turn that off? I guess that this is what happens to me as well, 
but the system should not hang.
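
A quick way to check for such a completion flood is to watch the 
interrupt counts while fio runs; the grep pattern is only an example and 
depends on the IRQ names on your system:

  watch -d -n 1 'grep -i sas /proc/interrupts'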

> Since you mentioned you noticed the issue with hisi_sas v2 without
> hostwide tags, I can think of similar stuff in this case.
> 
> How are CPU-to-IRQ affinities settled in your case? Is it a 1:1 mapping?

We have a 4:1 CPU-to-HW queue mapping.
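
If you want to compare, the per-hctx CPU mapping is visible in sysfs and 
the IRQ affinity in procfs; sdX and <irq> are placeholders (fall back to 
smp_affinity_list if your kernel does not expose the effective mask):

  # CPUs served by each blk-mq hardware queue
  grep -H . /sys/block/sdX/mq/*/cpu_list
  # CPUs to which a given IRQ is actually delivered
  cat /proc/irq/<irq>/effective_affinity_list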

Disabling CONFIG_PREEMPT also makes the issue go away for me, so if it 
is disabled on your side it would be useful to enable it and try to 
recreate, like:

  more .config | grep PREEMPT
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
CONFIG_PREEMPT_COUNT=y
CONFIG_PREEMPTION=y
CONFIG_PREEMPT_RCU=y
CONFIG_PREEMPT_NOTIFIERS=y
# CONFIG_DEBUG_PREEMPT is not set
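
If PREEMPT turns out to be disabled on your side, it can be flipped on 
the same way (scripts/config from the kernel tree; olddefconfig resolves 
the rest of the preemption choice):

  ./scripts/config -e PREEMPT
  make olddefconfig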

Thanks,
John

> 
> Kashyap
> 
>>
>> scsi debug or null_blk don't seem to load the system heavily enough to
>> recreate.
>>
>> I have seen it on 5.11 also. I see it on hisi_sas v2 and v3 hw drivers,
>> and I don't think it's related to hostwide tags, as for the hisi_sas v2
>> hw driver I unset that flag and can still see it.
>>
>> Thanks,
>> John
>>
>> [0]
>> https://lore.kernel.org/linux-scsi/89ebc37c-21d6-c57e-4267-cac49a3e5953@huawei.com/T/#t

