From: Marc Gonzalez <marc.w.gonzalez@free.fr>
To: linux-mm <linux-mm@kvack.org>, linux-block <linux-block@vger.kernel.org>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>,
Christoph Hellwig <hch@infradead.org>,
Jens Axboe <axboe@kernel.dk>,
fsdevel <linux-fsdevel@vger.kernel.org>,
SCSI <linux-scsi@vger.kernel.org>,
Joao Pinto <jpinto@synopsys.com>,
Jeffrey Hugo <jhugo@codeaurora.org>,
Evan Green <evgreen@chromium.org>,
Matthias Kaehlcke <mka@chromium.org>,
Douglas Anderson <dianders@chromium.org>,
Stephen Boyd <swboyd@chromium.org>,
Tomas Winkler <tomas.winkler@intel.com>,
Adrian Hunter <adrian.hunter@intel.com>,
Alim Akhtar <alim.akhtar@samsung.com>,
Avri Altman <avri.altman@wdc.com>,
Bart Van Assche <bart.vanassche@wdc.com>,
Martin Petersen <martin.petersen@oracle.com>,
Bjorn Andersson <bjorn.andersson@linaro.org>,
Ming Lei <ming.lei@redhat.com>, Omar Sandoval <osandov@fb.com>,
Roman Gushchin <guro@fb.com>,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>
Subject: Re: dd hangs when reading large partitions
Date: Thu, 7 Feb 2019 11:44:34 +0100 [thread overview]
Message-ID: <d91e8342-4672-d51d-1bde-74e910e5a959@free.fr> (raw)
In-Reply-To: <0cfe1ed2-41e1-66a4-8d98-ebc0d9645d21@free.fr>
+ linux-mm
Summarizing the issue for linux-mm readers:
If I read from a storage device whose capacity exceeds the system's RAM, the system freezes
once dd has read more data than there is available RAM.
# dd if=/dev/sde of=/dev/null bs=1M & while true; do echo m > /proc/sysrq-trigger; echo; echo; sleep 1; done
https://pastebin.ubuntu.com/p/HXzdqDZH4W/
A few seconds before the system hangs, Mem-Info shows:
[ 90.986784] Node 0 active_anon:7060kB inactive_anon:13644kB active_file:0kB inactive_file:3797500kB [...]
=> 3797500 kB (~3.6 GiB) is basically all of this system's RAM.
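The Mem-Info numbers above can also be watched directly from userspace; a minimal sketch, assuming a Linux /proc/meminfo with the field names used by recent kernels:

```shell
# Print the inactive, file-backed page-cache size in kB -- the same
# quantity as the per-node inactive_file counter that keeps growing
# while dd runs.
awk '/^Inactive\(file\):/ {print $2}' /proc/meminfo
```

Running this in a loop next to the dd invocation shows the counter climbing toward the total RAM size.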
I tried to locate where "inactive_file" was being incremented, and found two call-stack signatures:
[ 255.606019] __mod_node_page_state | __pagevec_lru_add_fn | pagevec_lru_move_fn | __lru_cache_add | lru_cache_add | add_to_page_cache_lru | mpage_readpages | blkdev_readpages | read_pages | __do_page_cache_readahead | ondemand_readahead | page_cache_sync_readahead
[ 255.637238] __mod_node_page_state | __pagevec_lru_add_fn | pagevec_lru_move_fn | __lru_cache_add | lru_cache_add | lru_cache_add_active_or_unevictable | __handle_mm_fault | handle_mm_fault | do_page_fault | do_translation_fault | do_mem_abort | el1_da
Are these expected?
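For reference, one way such call-stack signatures can be captured is a kprobe on __mod_node_page_state with a stacktrace trigger. This is a sketch only (the report above does not say which method was actually used); the probe name "pgstate" is hypothetical, and it needs root plus a kernel with kprobe events enabled:

```shell
# Locate tracefs: /sys/kernel/tracing on recent kernels,
# /sys/kernel/debug/tracing on older ones.
TRACEFS=
for d in /sys/kernel/tracing /sys/kernel/debug/tracing; do
    [ -w "$d/kprobe_events" ] && TRACEFS=$d && break
done
if [ -n "$TRACEFS" ]; then
    # Hypothetical probe "pgstate": fires on every call to
    # __mod_node_page_state and dumps the kernel stack.
    echo 'p:pgstate __mod_node_page_state' >> "$TRACEFS/kprobe_events"
    echo stacktrace > "$TRACEFS/events/kprobes/pgstate/trigger"
    echo 1 > "$TRACEFS/events/kprobes/pgstate/enable"
    timeout 2 cat "$TRACEFS/trace_pipe" | head -n 40   # sample a few stacks
    # Clean up: disable the event, drop the trigger, remove the probe.
    echo 0 > "$TRACEFS/events/kprobes/pgstate/enable"
    echo '!stacktrace' > "$TRACEFS/events/kprobes/pgstate/trigger"
    echo '-:pgstate' >> "$TRACEFS/kprobe_events"
else
    echo "tracefs not writable (need root); skipping"
fi
```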
NB: the system does not hang if I specify 'iflag=direct' to dd.
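The same effect is visible on any regular file; a minimal sketch of the workaround, with a hypothetical file name, assuming the underlying filesystem supports O_DIRECT:

```shell
# Populate a small test file, then read it back with O_DIRECT.
# Direct I/O bypasses the page cache, so these reads add nothing to the
# inactive_file LRU that fills up in the buffered case above.
dd if=/dev/zero of=direct-demo.img bs=1M count=16 2>/dev/null
sync
dd if=direct-demo.img of=/dev/null bs=1M iflag=direct 2>/dev/null
rm -f direct-demo.img
```

Note that O_DIRECT requires the buffer size and file offset to meet the device's alignment constraints, which bs=1M satisfies here.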
According to the RCU watchdog:
[ 108.466240] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[ 108.466420] rcu: 1-...0: (130 ticks this GP) idle=79e/1/0x4000000000000000 softirq=2393/2523 fqs=2626
[ 108.471436] rcu: (detected by 4, t=5252 jiffies, g=133, q=85)
[ 108.480605] Task dump for CPU 1:
[ 108.486483] kworker/1:1H R running task 0 680 2 0x0000002a
[ 108.489977] Workqueue: kblockd blk_mq_run_work_fn
[ 108.496908] Call trace:
[ 108.501513] __switch_to+0x174/0x1e0
[ 108.503757] blk_mq_run_work_fn+0x28/0x40
[ 108.507589] process_one_work+0x208/0x480
[ 108.511486] worker_thread+0x48/0x460
[ 108.515480] kthread+0x124/0x130
[ 108.519123] ret_from_fork+0x10/0x1c
Can anyone shed some light on what's going on?
Regards.
Thread overview: 34+ messages
2019-01-18 12:10 dd hangs when reading large partitions Marc Gonzalez
2019-01-18 13:39 ` Ming Lei
2019-01-18 14:54 ` Marc Gonzalez
2019-01-18 15:18 ` jianchao.wang
2019-01-18 17:38 ` Marc Gonzalez
2019-01-18 17:48 ` Jens Axboe
2019-01-18 17:51 ` Bart Van Assche
2019-01-18 19:00 ` Jens Axboe
2019-01-19 9:56 ` Christoph Hellwig
2019-01-19 14:37 ` Jens Axboe
2019-01-19 16:09 ` Bart Van Assche
2019-01-21 8:33 ` Christoph Hellwig
2019-01-19 19:47 ` Marc Gonzalez
2019-01-19 20:45 ` Marc Gonzalez
2019-01-21 8:33 ` Christoph Hellwig
2019-01-21 15:22 ` Marc Gonzalez
2019-01-22 3:12 ` jianchao.wang
2019-01-22 10:59 ` Marc Gonzalez
2019-01-22 12:49 ` Marc Gonzalez
2019-01-22 16:17 ` Marc Gonzalez
2019-01-22 16:22 ` Greg Kroah-Hartman
2019-01-22 19:07 ` Evan Green
2019-01-23 3:10 ` jianchao.wang
2019-02-06 16:16 ` Marc Gonzalez
2019-02-06 17:05 ` Marc Gonzalez
2019-02-07 10:44 ` Marc Gonzalez [this message]
2019-02-07 16:56 ` Marc Gonzalez
2019-02-08 15:33 ` Marc Gonzalez
2019-02-08 15:49 ` Bart Van Assche
2019-02-09 11:57 ` Marc Gonzalez
2019-02-11 16:36 ` Marc Gonzalez
2019-02-11 17:27 ` Marc Gonzalez
2019-02-12 15:26 ` [SOLVED] " Marc Gonzalez
2019-01-18 19:27 ` Douglas Gilbert