* Hung tasks with multiple partitions
@ 2020-01-30 19:34 Salman Qazi
  2020-01-30 20:49 ` Bart Van Assche
  0 siblings, 1 reply; 22+ messages in thread
From: Salman Qazi @ 2020-01-30 19:34 UTC (permalink / raw)
  To: Jens Axboe, Linux Kernel Mailing List, linux-block
  Cc: Jesse Barnes, Gwendal Grignou

Hi,

I am writing on behalf of the Chromium OS team at Google.  We found
the root cause for some hung tasks we were experiencing and we would
like to get your opinion on potential solutions.  The bugs were
encountered on the 4.19 kernel.  However, my reading of the code
suggests that the relevant portions have not changed since then.

We have an eMMC flash drive that has been carved into partitions on an
8-CPU system.  The repro case we came up with is to run an 8-thread,
write-mostly fio workload against one partition, let the system use the
other partition as its read-write filesystem (i.e. just background
activity), and then run the following loop:

while true; do sync; sleep 1 ; done
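
For reference, an fio invocation along these lines approximates the
workload described above (illustrative only; the device path, block
size and write mix below are placeholders, not our exact job file):

  fio --name=writemostly --filename=/dev/mmcblk0p3 --rw=randrw \
      --rwmixwrite=80 --bs=4k --ioengine=libaio --iodepth=32 \
      --direct=1 --numjobs=8 --time_based --runtime=300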

The hung task stack traces look like the following:

[  128.994891] jbd2/dm-1-8     D    0   367      2 0x00000028
last_sleep: 96340206998.  last_runnable: 96340140151
[  128.994898] Call trace:
[  128.994903]  __switch_to+0x120/0x13c
[  128.994909]  __schedule+0x60c/0x7dc
[  128.994914]  schedule+0x74/0x94
[  128.994919]  io_schedule+0x1c/0x40
[  128.994925]  bit_wait_io+0x18/0x58
[  128.994930]  __wait_on_bit+0x78/0xdc
[  128.994935]  out_of_line_wait_on_bit+0xa0/0xcc
[  128.994943]  __wait_on_buffer+0x48/0x54
[  128.994948]  jbd2_journal_commit_transaction+0x1198/0x1a4c
[  128.994956]  kjournald2+0x19c/0x268
[  128.994961]  kthread+0x120/0x130
[  128.994967]  ret_from_fork+0x10/0x18

I added some more information to trace points to understand what was
going on.  It turns out that blk_mq_sched_dispatch_requests had
checked hctx->dispatch, found it empty, and then began consuming
requests from the I/O scheduler (in blk_mq_do_dispatch_sched).
Unfortunately, the deluge from the I/O scheduler (BFQ in our case)
doesn't stop for 30 seconds, and there is no mechanism in
blk_mq_do_dispatch_sched to terminate early or reconsider the
contents of hctx->dispatch.  In the meantime, a flush command arrives
in hctx->dispatch (via insertion in blk_mq_sched_bypass_insert) and
languishes there.  Eventually the thread waiting on the flush triggers
the hung task watchdog.
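
For readers less familiar with that path, the loop in question looks
roughly like this in 4.19 (a heavily simplified paraphrase with budget
handling omitted, not the literal kernel source):

	/*
	 * Simplified sketch of blk_mq_do_dispatch_sched(): keep pulling
	 * requests from the elevator until it stops producing or the
	 * driver stops accepting them; hctx->dispatch is never consulted.
	 */
	do {
		struct request *rq;

		if (e->type->ops.mq.has_work &&
		    !e->type->ops.mq.has_work(hctx))
			break;

		rq = e->type->ops.mq.dispatch_request(hctx);
		if (!rq)
			break;

		list_add(&rq->queuelist, &rq_list);
	} while (blk_mq_dispatch_rq_list(q, &rq_list, true));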

The solution that comes to mind is to periodically check
hctx->dispatch in blk_mq_do_dispatch_sched and exit early if it is
non-empty.  However, not being an expert in this subsystem, I am not
sure if there would be other consequences.
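
Concretely, I had something along these lines in mind at the top of the
dispatch loop (an untested sketch, just to illustrate the idea; the
exact check and its placement may well need more thought):

	/*
	 * Untested: bail out of the dispatch-from-scheduler loop as soon
	 * as requests show up in hctx->dispatch, so that e.g. a flush
	 * inserted there does not starve behind the elevator.
	 */
	if (!list_empty_careful(&hctx->dispatch))
		break;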

Any help is appreciated,

Salman


Thread overview: 22+ messages
2020-01-30 19:34 Hung tasks with multiple partitions Salman Qazi
2020-01-30 20:49 ` Bart Van Assche
2020-01-30 21:02   ` Salman Qazi
     [not found]     ` <20200203204554.119849-1-sqazi@google.com>
2020-02-03 20:59       ` [PATCH] block: Limit number of items taken from the I/O scheduler in one go Salman Qazi
2020-02-04  3:47         ` Bart Van Assche
2020-02-04  9:20         ` Ming Lei
2020-02-04 18:26           ` Salman Qazi
2020-02-04 19:37             ` Salman Qazi
2020-02-05  4:55               ` Ming Lei
2020-02-05 19:57                 ` Salman Qazi
2020-02-06 10:18                   ` Ming Lei
2020-02-06 21:12                     ` Salman Qazi
2020-02-07  2:07                       ` Ming Lei
2020-02-07 15:26                       ` Bart Van Assche
2020-02-07 18:45                         ` Salman Qazi
2020-02-07 19:04                           ` Salman Qazi
2020-02-07 20:19                           ` Bart Van Assche
2020-02-07 20:37                             ` Salman Qazi
2020-04-20 16:42                               ` Doug Anderson
2020-04-23 20:13                                 ` Jesse Barnes
2020-04-23 20:34                                   ` Jens Axboe
2020-04-23 20:40                                     ` Salman Qazi
