From: Marc Gonzalez <marc.w.gonzalez@free.fr>
To: Ming Lei <tom.leiming@gmail.com>,
fsdevel <linux-fsdevel@vger.kernel.org>,
linux-block <linux-block@vger.kernel.org>
Cc: SCSI <linux-scsi@vger.kernel.org>,
Alexander Viro <viro@zeniv.linux.org.uk>,
Jan Kara <jack@suse.com>, Christoph Hellwig <hch@infradead.org>,
Joao Pinto <jpinto@synopsys.com>, Jens Axboe <axboe@kernel.dk>,
Fujita Tomonori <fujita.tomonori@lab.ntt.co.jp>,
Paolo Valente <paolo.valente@linaro.org>
Subject: Re: dd hangs when reading large partitions
Date: Fri, 18 Jan 2019 15:54:48 +0100
Message-ID: <c1ff1267-c559-9de2-efc6-8ec9a8b9074a@free.fr>
In-Reply-To: <CACVXFVOwihuWJ9p52G_zEE7tw_BgEK7fnw3LgSJuG-FzWGT=ag@mail.gmail.com>
On 18/01/2019 14:39, Ming Lei wrote:
> On Fri, Jan 18, 2019 at 8:11 PM Marc Gonzalez wrote:
>
>> I'm running into an issue which I don't know how to debug.
>> So I'm open to ideas and suggestions :-)
>>
>> On my arm64 board, I have enabled Universal Flash Storage support.
>>
>> I wanted to benchmark read performance, and noticed that the system
>> locks up when I read partitions larger than 3.5 GB, unless I tell
>> dd to use direct IO:
>>
>> *** WITH O_DIRECT ***
>> # dd if=/dev/sda of=/dev/null bs=1M iflag=direct status=progress
>> 57892929536 bytes (58 GB, 54 GiB) copied, 697.006 s, 83.1 MB/s
>> 55256+0 records in
>> 55256+0 records out
>> 57940115456 bytes (58 GB, 54 GiB) copied, 697.575 s, 83.1 MB/s
>>
>> *** WITHOUT O_DIRECT ***
>> # dd if=/dev/sda of=/dev/null bs=1M status=progress
>> 3853516800 bytes (3.9 GB, 3.6 GiB) copied, 49.0002 s, 78.6 MB/s
>>
>>
>> rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
>> rcu: 1-...0: (8242 ticks this GP) idle=106/1/0x4000000000000000 softirq=168/171 fqs=2626
>> rcu: 6-...0: (99 GPs behind) idle=ec2/1/0x4000000000000000 softirq=71/71 fqs=2626
>> rcu: (detected by 7, t=5254 jiffies, g=-275, q=2)
>> Task dump for CPU 1:
>> kworker/1:1H R running task 0 675 2 0x0000002a
>> Workqueue: kblockd blk_mq_run_work_fn
>> Call trace:
>> __switch_to+0x168/0x1d0
>> 0xffffffc0f6efbbc8
>> blk_mq_run_work_fn+0x28/0x40
>> process_one_work+0x208/0x470
>> worker_thread+0x48/0x460
>> kthread+0x128/0x130
>> ret_from_fork+0x10/0x1c
>> Task dump for CPU 6:
>> kthreadd R running task 0 2 0 0x0000002a
>> Call trace:
>> __switch_to+0x168/0x1d0
>> 0x5b36396f4e7d4000
>>
>>
>> The system always hangs around the 3.6 GiB mark, wherever I start from.
>> How can I debug this issue?
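
(Aside, for anyone trying to reproduce the buffered-vs-direct difference without a UFS part: the two invocations above differ only in O_DIRECT. A scratch-file sketch of the same pair of reads, not the failing setup itself, would look like:)

```shell
# Scratch-file version of the two reads above. O_DIRECT bypasses the
# page cache, so only the buffered variant exercises readahead.
dd if=/dev/zero of=dd_scratch.img bs=1M count=8 2>/dev/null

# Buffered read, as in the hanging case (page cache + readahead):
dd if=dd_scratch.img of=/dev/null bs=1M 2>&1 | grep 'records in'

# Direct read, as in the working case (note: some filesystems
# reject O_DIRECT, hence the fallback message):
dd if=dd_scratch.img of=/dev/null bs=1M iflag=direct 2>/dev/null \
    || echo "O_DIRECT unsupported on this filesystem"

rm -f dd_scratch.img
```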
>
> Maybe you can try to collect debugfs log first via the following command?
>
> (cd /sys/kernel/debug/block/sda && find . -type f -exec grep -aH . {} \;)
# find . -type f -exec grep -aH . {} \;
./hctx0/cpu7/completed:0 0
./hctx0/cpu7/merged:0
./hctx0/cpu7/dispatched:0 0
./hctx0/cpu6/completed:0 5
./hctx0/cpu6/merged:0
./hctx0/cpu6/dispatched:0 5
./hctx0/cpu5/completed:0 0
./hctx0/cpu5/merged:0
./hctx0/cpu5/dispatched:0 0
./hctx0/cpu4/completed:0 0
./hctx0/cpu4/merged:0
./hctx0/cpu4/dispatched:0 0
./hctx0/cpu3/completed:5 38
./hctx0/cpu3/merged:0
./hctx0/cpu3/dispatched:5 38
./hctx0/cpu2/completed:0 0
./hctx0/cpu2/merged:0
./hctx0/cpu2/dispatched:0 0
./hctx0/cpu1/completed:0 0
./hctx0/cpu1/merged:0
./hctx0/cpu1/dispatched:0 0
./hctx0/cpu0/completed:0 0
./hctx0/cpu0/merged:0
./hctx0/cpu0/dispatched:0 0
./hctx0/type:default
./hctx0/dispatch_busy:0
./hctx0/active:0
./hctx0/run:43
./hctx0/queued:48
./hctx0/dispatched: 0 0
./hctx0/dispatched: 1 43
./hctx0/dispatched: 2 0
./hctx0/dispatched: 4 0
./hctx0/dispatched: 8 0
./hctx0/dispatched: 16 0
./hctx0/dispatched: 32+ 0
./hctx0/io_poll:considered=0
./hctx0/io_poll:invoked=0
./hctx0/io_poll:success=0
./hctx0/tags_bitmap:00000000: ff07 070f
./hctx0/tags:nr_tags=32
./hctx0/tags:nr_reserved_tags=0
./hctx0/tags:active_queues=0
./hctx0/tags:bitmap_tags:
./hctx0/tags:depth=32
./hctx0/tags:busy=0
./hctx0/tags:cleared=18
./hctx0/tags:bits_per_word=8
./hctx0/tags:map_nr=4
./hctx0/tags:alloc_hint={12, 7, 12, 24, 12, 27, 10, 18}
./hctx0/tags:wake_batch=4
./hctx0/tags:wake_index=0
./hctx0/tags:ws_active=0
./hctx0/tags:ws={
./hctx0/tags: {.wait_cnt=4, .wait=inactive},
./hctx0/tags: {.wait_cnt=4, .wait=inactive},
./hctx0/tags: {.wait_cnt=4, .wait=inactive},
./hctx0/tags: {.wait_cnt=4, .wait=inactive},
./hctx0/tags: {.wait_cnt=4, .wait=inactive},
./hctx0/tags: {.wait_cnt=4, .wait=inactive},
./hctx0/tags: {.wait_cnt=4, .wait=inactive},
./hctx0/tags: {.wait_cnt=4, .wait=inactive},
./hctx0/tags:}
./hctx0/tags:round_robin=0
./hctx0/tags:min_shallow_depth=4294967295
./hctx0/ctx_map:00000000: 00
./hctx0/flags:alloc_policy=FIFO SHOULD_MERGE|TAG_SHARED|SG_MERGE
./write_hints:hint0: 0
./write_hints:hint1: 0
./write_hints:hint2: 0
./write_hints:hint3: 0
./write_hints:hint4: 0
./state:SAME_COMP|NONROT|IO_STAT|DISCARD|INIT_DONE|WC|FUA|REGISTERED|SCSI_PASSTHROUGH
./pm_only:0
./poll_stat:read (512 Bytes): samples=0
./poll_stat:write (512 Bytes): samples=0
./poll_stat:read (1024 Bytes): samples=0
./poll_stat:write (1024 Bytes): samples=0
./poll_stat:read (2048 Bytes): samples=0
./poll_stat:write (2048 Bytes): samples=0
./poll_stat:read (4096 Bytes): samples=0
./poll_stat:write (4096 Bytes): samples=0
./poll_stat:read (8192 Bytes): samples=0
./poll_stat:write (8192 Bytes): samples=0
./poll_stat:read (16384 Bytes): samples=0
./poll_stat:write (16384 Bytes): samples=0
./poll_stat:read (32768 Bytes): samples=0
./poll_stat:write (32768 Bytes): samples=0
./poll_stat:read (65536 Bytes): samples=0
./poll_stat:write (65536 Bytes): samples=0
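
(The fields that seem to matter for a stuck queue are run, queued, dispatch_busy and tags:busy. A one-liner to pull just those out of a dump like the above, shown here against a scratch tree since the real path needs debugfs mounted and root:)

```shell
# Filter a blk-mq debugfs dump down to the queue-health counters.
# Built on a scratch tree for illustration; on the real system you
# would cd into /sys/kernel/debug/block/sda instead.
mkdir -p blkdbg/hctx0
printf '43\n' > blkdbg/hctx0/run
printf '48\n' > blkdbg/hctx0/queued
printf '0\n'  > blkdbg/hctx0/dispatch_busy
(cd blkdbg && find . -type f -exec grep -aH . {} \;) \
    | grep -E 'run|queued|dispatch_busy'
rm -rf blkdbg
```

In the dump above, queued=48 against run=43 with tags:busy=0 at least narrows things: requests were queued, but dispatch apparently stopped, which fits the kblockd worker seen stuck in blk_mq_run_work_fn.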
> BTW, I have several questions on this report:
>
> - what is the kernel version in your test?
v5.0-rc1 + a few patches to enable UFS
> - can you reproduce this issue on other disk (not UFS)?
At the moment, UFS is the only available storage option.
(USB is broken, SDHC requires some work.)
> - are there any tasks in 'D' state shown via 'ps -ax'? If yes, please dump their
> stack trace.
(This rootfs provides busybox ps.)
# ps -o pid,user,stat,args
PID USER STAT COMMAND
1 root S init
2 root SW [kthreadd]
3 root IW< [rcu_gp]
4 root IW< [rcu_par_gp]
5 root IW [kworker/0:0-eve]
6 root IW< [kworker/0:0H-kb]
8 root IW< [mm_percpu_wq]
9 root SW [ksoftirqd/0]
10 root IW [rcu_preempt]
11 root SW [migration/0]
12 root IW [kworker/0:1-eve]
13 root SW [cpuhp/0]
14 root SW [cpuhp/1]
15 root SW [migration/1]
16 root SW [ksoftirqd/1]
17 root IW [kworker/1:0-eve]
18 root IW< [kworker/1:0H-kb]
19 root SW [cpuhp/2]
20 root SW [migration/2]
21 root SW [ksoftirqd/2]
22 root IW [kworker/2:0-eve]
23 root IW< [kworker/2:0H-kb]
24 root SW [cpuhp/3]
25 root SW [migration/3]
26 root SW [ksoftirqd/3]
27 root IW [kworker/3:0-eve]
28 root IW< [kworker/3:0H-kb]
29 root SW [cpuhp/4]
30 root SW [migration/4]
31 root SW [ksoftirqd/4]
32 root IW [kworker/4:0-eve]
33 root IW< [kworker/4:0H]
34 root SW [cpuhp/5]
35 root SW [migration/5]
36 root SW [ksoftirqd/5]
37 root IW [kworker/5:0-eve]
38 root IW< [kworker/5:0H]
39 root SW [cpuhp/6]
40 root SW [migration/6]
41 root SW [ksoftirqd/6]
42 root IW [kworker/6:0-eve]
43 root IW< [kworker/6:0H-kb]
44 root SW [cpuhp/7]
45 root SW [migration/7]
46 root SW [ksoftirqd/7]
47 root IW [kworker/7:0-eve]
48 root IW< [kworker/7:0H]
49 root SW [kdevtmpfs]
52 root SW [rcu_tasks_kthre]
53 root IW [kworker/3:1-mm_]
64 root IW [kworker/4:1-mm_]
65 root IW [kworker/5:1-eve]
76 root IW [kworker/6:1-mm_]
78 root IW [kworker/1:1-mm_]
88 root IW [kworker/7:1-eve]
101 root IW [kworker/2:1-mm_]
198 root SW [khungtaskd]
199 root SW [oom_reaper]
200 root IW< [writeback]
202 root SW [kcompactd0]
204 root IW< [kblockd]
223 root IW< [devfreq_wq]
372 root SW [kswapd0]
505 root SW [scsi_eh_0]
506 root IW< [scsi_tmf_0]
507 root IW< [ufs_clk_gating_]
508 root IW< [ufs_clkscaling_]
546 root IW [kworker/u16:5-e]
555 root IW [kworker/u16:7-e]
657 root S /sbin/klogd -n
675 root S -sh
677 root IW< [kworker/6:1H-kb]
678 root IW< [kworker/3:1H]
679 root IW< [kworker/1:1H]
680 root IW< [kworker/2:1H]
681 root IW< [kworker/0:1H]
859 root R ps -o pid,user,stat,args
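
(No task above is in D state, so there is nothing to dump. For the record, a busybox-compatible loop to print the kernel stack of any D-state task, assuming /proc/<pid>/stack is available, i.e. CONFIG_STACKTRACE, would be:)

```shell
# Print the kernel stack of every task currently in uninterruptible
# sleep (state D). Needs root for /proc/<pid>/stack; errors are
# ignored so the loop survives tasks exiting mid-scan.
# (Naive stat parsing: a comm containing spaces would shift field 3.)
for p in /proc/[0-9]*; do
    state=$(awk '{print $3}' "$p/stat" 2>/dev/null)
    if [ "$state" = "D" ]; then
        echo "=== pid ${p#/proc/} ==="
        cat "$p/stack" 2>/dev/null
    fi
done
```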
Do you see anything relevant?
Regards.