* [bug report] scsi host hang when running fio
@ 2021-04-19  9:51 John Garry
  2021-04-19 11:43 ` Kashyap Desai
  0 siblings, 1 reply; 4+ messages in thread
From: John Garry @ 2021-04-19  9:51 UTC (permalink / raw)
  To: Ming Lei, Kashyap Desai, linux-scsi, linux-block, Hannes Reinecke
  Cc: chenxiang, luojiaxing

Hi guys,

While investigating the performance issue reported by Ming [0], I am 
seeing this hang in certain scenarios:

tivated0KB /s] [0/0/0 iops] [eta 1158048815d:13h:31m:49s]
[ 740.499917] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:ops] 
[eta 34722d:05h:17m:25s]
[ 740.505994] rcu: Tasks blocked on level-1 rcu_node (CPUs 0-15):
[ 740.511982] (detected by 64, t=5255 jiffies, g=6105, q=6697)
[ 740.517703] rcu: All QSes seen, last rcu_preempt kthread activity 0 
(4295075897-4295075897), jiffies_till_next_fqs=1,
root ->qsmask 0x1
[ 740.723625] BUG: scheduling while atomic: swapper/64/0/0x00000008
[ 740.729692] Modules linked in:
[ 740.732737] CPU: 64 PID: 0 Comm: swapper/64 Tainted: G W 
5.12.0-rc7-g7589ed97c1da-dirty #322
[ 740.742432] Hardware name: Huawei TaiShan 2280 V2/BC82AMDC, BIOS 
2280-V2 CS V5.B133.01 03/25/2021
[ 740.751264] Call trace:
[ 740.753699] dump_backtrace+0x0/0x1b0
[ 740.757353] show_stack+0x18/0x68
[ 740.760654] dump_stack+0xd8/0x134
[ 740.764046] __schedule_bug+0x60/0x78
[ 740.767694] __schedule+0x620/0x6d8
[ 740.771168] schedule_idle+0x20/0x40
[ 740.774730] do_idle+0x19c/0x278
[ 740.777945] cpu_startup_entry+0x24/0x68
[ 740.781850] secondary_start_kernel+0x178/0x188
[ 740.786362] 0x0
^Cbs: 12 (f=12): [r(12)] [0.0% done] [1626MB/0KB/0KB /s] [416K/0/0 iops] 
[eta 34722d:05h:16m:28s]
fio: terminating on signal 2

I thought it merited a separate thread.

[ 740.723625] BUG: scheduling while atomic: swapper/64/0/0x00000008
Looks bad ...

The scenario to recreate seems to be running fio with rw=randread and the 
mq-deadline IO scheduler, and heavily loading the system - running fio on 
a subset of available CPUs seems to help recreate it.
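
Roughly, something like this is what I mean (the device name and CPU list 
here are just placeholders - adjust for the system under test):

   echo mq-deadline > /sys/block/sdX/queue/scheduler
   taskset -c 0-15 fio --name=job1 --filename=/dev/sdX --direct=1 \
       --rw=randread --bs=4k --ioengine=libaio --iodepth=128 \
       --numjobs=12 --runtime=300 --time_based --group_reporting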

When it occurs, the system becomes totally unresponsive.

It could be a LLDD bug, but I am doubtful.

Has anyone else seen this, or could anyone help try to recreate it?

scsi_debug or null_blk don't seem to load the system heavily enough to 
recreate it.

I have seen it on 5.11 also. I see it on the hisi_sas v2 and v3 hw drivers. 
And I don't think it's related to hostwide tags, as for the hisi_sas v2 hw 
driver, I unset that flag and can still see it.

Thanks,
John

[0] 
https://lore.kernel.org/linux-scsi/89ebc37c-21d6-c57e-4267-cac49a3e5953@huawei.com/T/#t


* RE: [bug report] scsi host hang when running fio
  2021-04-19  9:51 [bug report] scsi host hang when running fio John Garry
@ 2021-04-19 11:43 ` Kashyap Desai
  2021-04-19 15:11   ` John Garry
  2021-04-27  9:41   ` John Garry
  0 siblings, 2 replies; 4+ messages in thread
From: Kashyap Desai @ 2021-04-19 11:43 UTC (permalink / raw)
  To: John Garry, Ming Lei, linux-scsi, linux-block, Hannes Reinecke
  Cc: chenxiang, luojiaxing

> Hi guys,
>
> While investigating the performance issue reported by Ming [0], I am
> seeing
> this hang in certain scenarios:
>
> tivated0KB /s] [0/0/0 iops] [eta 1158048815d:13h:31m:49s] [ 740.499917]
> rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:ops] [eta
> 34722d:05h:17m:25s] [ 740.505994] rcu: Tasks blocked on level-1 rcu_node
> (CPUs 0-15):
> [ 740.511982] (detected by 64, t=5255 jiffies, g=6105, q=6697) [
> 740.517703]
> rcu: All QSes seen, last rcu_preempt kthread activity 0 (4295075897-
> 4295075897), jiffies_till_next_fqs=1, root ->qsmask 0x1 [ 740.723625] BUG:
> scheduling while atomic: swapper/64/0/0x00000008 [ 740.729692] Modules
> linked in:
> [ 740.732737] CPU: 64 PID: 0 Comm: swapper/64 Tainted: G W 5.12.0-rc7-
> g7589ed97c1da-dirty #322 [ 740.742432] Hardware name: Huawei TaiShan
> 2280 V2/BC82AMDC, BIOS
> 2280-V2 CS V5.B133.01 03/25/2021
> [ 740.751264] Call trace:
> [ 740.753699] dump_backtrace+0x0/0x1b0
> [ 740.757353] show_stack+0x18/0x68
> [ 740.760654] dump_stack+0xd8/0x134
> [ 740.764046] __schedule_bug+0x60/0x78
> [ 740.767694] __schedule+0x620/0x6d8
> [ 740.771168] schedule_idle+0x20/0x40
> [ 740.774730] do_idle+0x19c/0x278
> [ 740.777945] cpu_startup_entry+0x24/0x68 [ 740.781850]
> secondary_start_kernel+0x178/0x188
> [ 740.786362] 0x0
> ^Cbs: 12 (f=12): [r(12)] [0.0% done] [1626MB/0KB/0KB /s] [416K/0/0 iops]
> [eta
> 34722d:05h:16m:28s]
> fio: terminating on signal 2
>
> I thought it merited a separate thread.
>
> [ 740.723625] BUG: scheduling while atomic: swapper/64/0/0x00000008
> Looks bad ...
>
> The scenario to recreate seems to be running fio with rw=randread and the
> mq-deadline IO scheduler, and heavily loading the system - running fio on a
> subset of available CPUs seems to help recreate it.
>
> When it occurs, the system becomes totally unresponsive.
>
> It could be a LLDD bug, but I am doubtful.
>
> Has anyone else seen this, or could anyone help try to recreate it?

John - I have not seen such an issue with the megaraid_sas driver. Is this
something to do with a CPU lockup?
Can you try your test with "rq_affinity=2"? The megaraid_sas driver detects a
CPU lockup (a flood of completions on a single CPU) and uses the irq_poll
interface to avoid such a loop.
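
For example, something like this (sdX is just a placeholder for the device
under test):

   echo 2 > /sys/block/sdX/queue/rq_affinity
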
Since you mentioned that you noticed the issue with hisi_sas v2 without
hostwide tags, I can imagine something similar happening in this case.

How is the CPU-to-IRQ affinity settled in your case? Is it a 1-1 mapping?
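
Something like this should show it (the grep pattern and IRQ number are just
placeholders for whatever your HBA registers):

   grep -i sas /proc/interrupts
   cat /proc/irq/<N>/effective_affinity_list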

Kashyap

>
> scsi_debug or null_blk don't seem to load the system heavily enough to
> recreate it.
>
> I have seen it on 5.11 also. I see it on hisi_sas v2 and v3 hw drivers,
> And I don't
> think it's related to hostwide tags, as for hisi_sas v2 hw driver, I unset
> that flag
> and can still see it.
>
> Thanks,
> John
>
> [0]
> https://lore.kernel.org/linux-scsi/89ebc37c-21d6-c57e-4267-
> cac49a3e5953@huawei.com/T/#t


* Re: [bug report] scsi host hang when running fio
  2021-04-19 11:43 ` Kashyap Desai
@ 2021-04-19 15:11   ` John Garry
  2021-04-27  9:41   ` John Garry
  1 sibling, 0 replies; 4+ messages in thread
From: John Garry @ 2021-04-19 15:11 UTC (permalink / raw)
  To: Kashyap Desai, Ming Lei, linux-scsi, linux-block, Hannes Reinecke
  Cc: chenxiang, luojiaxing

Hi Kashyap,

> John - I have not seen such an issue with the megaraid_sas driver.

I could try to test megaraid SAS also, but the system with that card has 
only 1x SATA disk, so pointless really.

> Is this something to do with a CPU lockup?

Seems to be.

JFYI, enabling the configs RCU_EXPERT, DEBUG_ATOMIC_SLEEP, and 
DEBUG_SPINLOCK gives the following (a sketch of one way to switch these on 
follows the log):

job1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128
job1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128
job1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128
fio-3.1
Starting 6 processes
[  196.342724] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:ta 
01h:12m:22s]
[  196.348816] rcu:     Tasks blocked on level-1 rcu_node (CPUs 32-47):
[  196.354913] rcu: All QSes seen, last rcu_preempt kthread activity 1 
(4294941135-4294941134), jiffies_till_next_fqs=1, root ->qsmask 0x4
[  196.367089] BUG: sleeping function called from invalid context at 
include/linux/uaccess.h:174
[  196.375605] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 
1893, name: fio
[  196.383502] BUG: scheduling while atomic: fio/1893/0x00000004
[  196.389312] BUG: spinlock recursion on CPU#11, fio/1893
[  196.394527]  lock: rcu_state+0x280/0x2d00, .magic: dead4ead, .owner: 
fio/1893, .owner_cpu: 11
[  196.403046] CPU: 11 PID: 1893 Comm: fio Tainted: G W 
5.12.0-rc7-00001-g3ae18ff9e445 #219
[  196.412426] Hardware name: Huawei Taishan 2280 /D05, BIOS Hisilicon 
D05 IT21 Nemo 2.0 RC0 04/18/2018
[  196.421544] Call trace:
[  196.423977]  dump_backtrace+0x0/0x1b0
[  196.427629]  show_stack+0x18/0x68
[  196.430932]  dump_stack+0xd8/0x134
[  196.434322]  spin_dump+0x84/0x94
[  196.437539]  do_raw_spin_lock+0x108/0x120
[  196.441539]  _raw_spin_lock+0x20/0x30
[  196.445191]  rcu_note_context_switch+0xbc/0x348
[  196.449710]  __schedule+0xc8/0x6e8
[  196.453100]  preempt_schedule_notrace+0x50/0x70
[  196.457618]  __arm64_sys_io_submit+0x188/0x240
[  196.462051]  el0_svc_common.constprop.2+0x8c/0x128
[  196.466829]  do_el0_svc+0x24/0x90
[  196.470133]  el0_svc+0x24/0x38
[  196.473175]  el0_sync_handler+0x90/0xb8
[  196.476999]  el0_sync+0x154/0x180
^Cbs: 6 (f=6): [r(6)][4.2%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 
01h:11m:54s]
fio: terminating on signal 2
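
For reference, a sketch of one way to switch those debug options on, using 
the kernel's scripts/config helper against an existing .config:

   ./scripts/config -e RCU_EXPERT -e DEBUG_ATOMIC_SLEEP -e DEBUG_SPINLOCK
   make olddefconfig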

> Can you try your test with "rq_affinity=2"?

I cannot see the issue with this setting.

> The megaraid_sas driver detects a CPU lockup (a flood of completions on a
> single CPU) and uses the irq_poll interface to avoid such a loop.

Can you turn it off? I guess that this is what happens to me, but the 
system should not hang.

> Since you mentioned that you noticed the issue with hisi_sas v2 without
> hostwide tags, I can imagine something similar happening in this case.
> 
> How is the CPU-to-IRQ affinity settled in your case? Is it a 1-1 mapping?

We have a 4-1 CPU-HW queue mapping.
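
For reference, the hctx-to-CPU mapping can be read back from sysfs, e.g. 
(sdX is a placeholder):

   cat /sys/block/sdX/mq/*/cpu_list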

Disabling CONFIG_PREEMPT makes the issue go away for me also, so to 
recreate it, it would be useful to try with preemption enabled (if it is 
currently disabled), like:

  more .config | grep PREEMPT
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
CONFIG_PREEMPT_COUNT=y
CONFIG_PREEMPTION=y
CONFIG_PREEMPT_RCU=y
CONFIG_PREEMPT_NOTIFIERS=y
# CONFIG_DEBUG_PREEMPT is not set

Thanks,
John

> 
> Kashyap
> 
>>
>> scsi_debug or null_blk don't seem to load the system heavily enough to
>> recreate it.
>>
>> I have seen it on 5.11 also. I see it on hisi_sas v2 and v3 hw drivers,
>> And I don't
>> think it's related to hostwide tags, as for hisi_sas v2 hw driver, I unset
>> that flag
>> and can still see it.
>>
>> Thanks,
>> John
>>
>> [0]
>> https://lore.kernel.org/linux-scsi/89ebc37c-21d6-c57e-4267-
>> cac49a3e5953@huawei.com/T/#t



* Re: [bug report] scsi host hang when running fio
  2021-04-19 11:43 ` Kashyap Desai
  2021-04-19 15:11   ` John Garry
@ 2021-04-27  9:41   ` John Garry
  1 sibling, 0 replies; 4+ messages in thread
From: John Garry @ 2021-04-27  9:41 UTC (permalink / raw)
  To: Kashyap Desai, Ming Lei, linux-scsi, linux-block, Hannes Reinecke
  Cc: chenxiang, luojiaxing

On 19/04/2021 12:43, Kashyap Desai wrote:
>> Hi guys,
>>
>> While investigating the performance issue reported by Ming [0], I am
>> seeing
>> this hang in certain scenarios:
>>
>> tivated0KB /s] [0/0/0 iops] [eta 1158048815d:13h:31m:49s] [ 740.499917]
>> rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:ops] [eta
>> 34722d:05h:17m:25s] [ 740.505994] rcu: Tasks blocked on level-1 rcu_node
>> (CPUs 0-15):
>> [ 740.511982] (detected by 64, t=5255 jiffies, g=6105, q=6697) [
>> 740.517703]
>> rcu: All QSes seen, last rcu_preempt kthread activity 0 (4295075897-
>> 4295075897), jiffies_till_next_fqs=1, root ->qsmask 0x1 [ 740.723625] BUG:
>> scheduling while atomic: swapper/64/0/0x00000008 [ 740.729692] Modules
>> linked in:
>> [ 740.732737] CPU: 64 PID: 0 Comm: swapper/64 Tainted: G W 5.12.0-rc7-
>> g7589ed97c1da-dirty #322 [ 740.742432] Hardware name: Huawei TaiShan
>> 2280 V2/BC82AMDC, BIOS
>> 2280-V2 CS V5.B133.01 03/25/2021
>> [ 740.751264] Call trace:
>> [ 740.753699] dump_backtrace+0x0/0x1b0
>> [ 740.757353] show_stack+0x18/0x68
>> [ 740.760654] dump_stack+0xd8/0x134
>> [ 740.764046] __schedule_bug+0x60/0x78
>> [ 740.767694] __schedule+0x620/0x6d8
>> [ 740.771168] schedule_idle+0x20/0x40
>> [ 740.774730] do_idle+0x19c/0x278
>> [ 740.777945] cpu_startup_entry+0x24/0x68 [ 740.781850]
>> secondary_start_kernel+0x178/0x188
>> [ 740.786362] 0x0
>> ^Cbs: 12 (f=12): [r(12)] [0.0% done] [1626MB/0KB/0KB /s] [416K/0/0 iops]
>> [eta
>> 34722d:05h:16m:28s]
>> fio: terminating on signal 2
>>
>> I thought it merited a separate thread.
>>
>> [ 740.723625] BUG: scheduling while atomic: swapper/64/0/0x00000008
>> Looks bad ...
>>
>> The scenario to recreate seems to be running fio with rw=randread and the
>> mq-deadline IO scheduler, and heavily loading the system - running fio on a
>> subset of available CPUs seems to help recreate it.
>>
>> When it occurs, the system becomes totally unresponsive.
>>
>> It could be a LLDD bug, but I am doubtful.
>>
>> Has anyone else seen this, or could anyone help try to recreate it?
> John - I have not seen such an issue with the megaraid_sas driver. Is this
> something to do with a CPU lockup?

JFYI, this appears to be an issue with the combination of a threaded irq 
handler and managed interrupts. I raised the issue with Thomas:

https://lore.kernel.org/lkml/874kfxw9zv.ffs@nanos.tec.linutronix.de/

Maybe NVMe PCI could have the same issue.

Thanks,
John




