LKML Archive on lore.kernel.org
From: Johannes Weiner <hannes@cmpxchg.org>
To: Brad Bolen <bradleybolen@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: kernel panic on null pointer on page->mem_cgroup
Date: Tue, 8 Aug 2017 16:08:49 -0400
Message-ID: <20170808200849.GA1104@cmpxchg.org> (raw)
In-Reply-To: <CADvgSZSn1v-tTpa07ebqr19heQbkzbavdPM_nbRNR1WF-EBnFw@mail.gmail.com>

On Tue, Aug 08, 2017 at 03:13:42PM -0400, Brad Bolen wrote:
> On Tue, Aug 8, 2017 at 1:37 PM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> > On Tue, Aug 08, 2017 at 09:56:01AM -0700, Jaegeuk Kim wrote:
> >> On 08/08, Johannes Weiner wrote:
> >> > Hi Jaegeuk and Bradley,
> >> >
> >> > On Mon, Aug 07, 2017 at 09:01:50PM -0400, Bradley Bolen wrote:
> >> > > I am getting a very similar error on v4.11 with an arm64 board.
> >> > >
> >> > > I, too, see page->mem_cgroup checked to make sure that it is not
> >> > > NULL and then several instructions later it is NULL.  It does appear
> >> > > that someone is changing that member without taking the lock.  In my
> >> > > setup, I see
> >> > >
> >> > > crash> bt
> >> > > PID: 72     TASK: e1f48640  CPU: 0   COMMAND: "mmcqd/1"
> >> > >  #0 [<c00ad35c>] (__crash_kexec) from [<c0101080>]
> >> > >  #1 [<c0101080>] (panic) from [<c028cd6c>]
> >> > >  #2 [<c028cd6c>] (svcerr_panic) from [<c028cdc4>]
> >> > >  #3 [<c028cdc4>] (_SvcErr_) from [<c001474c>]
> >> > >  #4 [<c001474c>] (die) from [<c00241f8>]
> >> > >  #5 [<c00241f8>] (__do_kernel_fault) from [<c0560600>]
> >> > >  #6 [<c0560600>] (do_page_fault) from [<c00092e8>]
> >> > >  #7 [<c00092e8>] (do_DataAbort) from [<c055f9f0>]
> >> > >     pc : [<c0112540>]    lr : [<c0112518>]    psr: a0000193
> >> > >     sp : c1a19cc8  ip : 00000000  fp : c1a19d04
> >> > >     r10: 0006ae29  r9 : 00000000  r8 : dfbf1800
> >> > >     r7 : dfbf1800  r6 : 00000001  r5 : f3c1107c  r4 : e2fb6424
> >> > >     r3 : 00000000  r2 : 00040228  r1 : 221e3000  r0 : a0000113
> >> > >     Flags: NzCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM
> >> > >  #8 [<c055f9f0>] (__dabt_svc) from [<c0112518>]
> >> > >  #9 [<c0112540>] (test_clear_page_writeback) from [<c01046d4>]
> >> > > #10 [<c01046d4>] (end_page_writeback) from [<c0149bcc>]
> >> > > #11 [<c0149bcc>] (end_swap_bio_write) from [<c0261460>]
> >> > > #12 [<c0261460>] (bio_endio) from [<c042c800>]
> >> > > #13 [<c042c800>] (dec_pending) from [<c042e648>]
> >> > > #14 [<c042e648>] (clone_endio) from [<c0261460>]
> >> > > #15 [<c0261460>] (bio_endio) from [<bf60aa00>]
> >> > > #16 [<bf60aa00>] (crypt_dec_pending [dm_crypt]) from [<bf60c1e8>]
> >> > > #17 [<bf60c1e8>] (crypt_endio [dm_crypt]) from [<c0261460>]
> >> > > #18 [<c0261460>] (bio_endio) from [<c0269e34>]
> >> > > #19 [<c0269e34>] (blk_update_request) from [<c026a058>]
> >> > > #20 [<c026a058>] (blk_update_bidi_request) from [<c026a444>]
> >> > > #21 [<c026a444>] (blk_end_bidi_request) from [<c026a494>]
> >> > > #22 [<c026a494>] (blk_end_request) from [<c0458dbc>]
> >> > > #23 [<c0458dbc>] (mmc_blk_issue_rw_rq) from [<c0459e24>]
> >> > > #24 [<c0459e24>] (mmc_blk_issue_rq) from [<c045a018>]
> >> > > #25 [<c045a018>] (mmc_queue_thread) from [<c0048890>]
> >> > > #26 [<c0048890>] (kthread) from [<c0010388>]
> >> > > crash> sym c0112540
> >> > > c0112540 (T) test_clear_page_writeback+512
> >> > >  /kernel-source/include/linux/memcontrol.h: 518
> >> > >
> >> > > crash> bt 35
> >> > > PID: 35     TASK: e1d45dc0  CPU: 1   COMMAND: "kswapd0"
> >> > >  #0 [<c0559ab8>] (__schedule) from [<c0559edc>]
> >> > >  #1 [<c0559edc>] (schedule) from [<c055e54c>]
> >> > >  #2 [<c055e54c>] (schedule_timeout) from [<c055a3a4>]
> >> > >  #3 [<c055a3a4>] (io_schedule_timeout) from [<c0106cb0>]
> >> > >  #4 [<c0106cb0>] (mempool_alloc) from [<c0261668>]
> >> > >  #5 [<c0261668>] (bio_alloc_bioset) from [<c0149d68>]
> >> > >  #6 [<c0149d68>] (get_swap_bio) from [<c014a280>]
> >> > >  #7 [<c014a280>] (__swap_writepage) from [<c014a3bc>]
> >> > >  #8 [<c014a3bc>] (swap_writepage) from [<c011e5c8>]
> >> > >  #9 [<c011e5c8>] (shmem_writepage) from [<c011a9b8>]
> >> > > #10 [<c011a9b8>] (shrink_page_list) from [<c011b528>]
> >> > > #11 [<c011b528>] (shrink_inactive_list) from [<c011c160>]
> >> > > #12 [<c011c160>] (shrink_node_memcg) from [<c011c400>]
> >> > > #13 [<c011c400>] (shrink_node) from [<c011d7dc>]
> >> > > #14 [<c011d7dc>] (kswapd) from [<c0048890>]
> >> > > #15 [<c0048890>] (kthread) from [<c0010388>]
> >> > >
> >> > > It appears that uncharge_list() in mm/memcontrol.c is not taking the
> >> > > page lock when it sets mem_cgroup to NULL.  I am not familiar with the
> >> > > mm code so I do not know if this is on purpose or not.  There is a
> >> > > comment in uncharge_list that makes me believe that the crashing code
> >> > > should not have been running:
> >> > > /*
> >> > >  * Nobody should be changing or seriously looking at
> >> > >  * page->mem_cgroup at this point, we have fully
> >> > >  * exclusive access to the page.
> >> > >  */
> >> > > However, I am new to looking at this area of the kernel so I am not
> >> > > sure.
> >> >
> >> > The lock is for pages that are actively being used, whereas the free
> >> > path requires the page refcount to be 0; nobody else should be having
> >> > access to the page at that time.
> >>
> >> Given various trials for a while, using __mod_memcg_state() instead of
> >> mod_memcg_state() seems to somehow make the panic go away. Might it be
> >> caused by kernel preemption?
> >
> > That's puzzling. Is that reliably the case? Because on x86-64,
> > __this_cpu_add and this_cpu_add should result in the same code:
> >
> > #define raw_cpu_add_8(pcp, val)                 percpu_add_op((pcp), val)
> > #define this_cpu_add_8(pcp, val)                percpu_add_op((pcp), val)
> >
> > which boils down to single instructions - incq, decq, addq - and so
> > never needs explicit disabling of scheduler or interrupt preemption.
> >
> >> > > I was able to create a reproducible scenario by using a udelay to
> >> > > increase the time between the if (page->mem_cgroup) check and the later
> >> > > dereference of it to increase the race window.  I then mounted an empty
> >> > > ext4 partition and ran the following no more than twice before it
> >> > > crashed.
> >> > > dd if=/dev/zero of=/tmp/ext4disk/test bs=1M count=100
> >> >
> >> > Thanks, that's useful. I'm going to try to reproduce this also.
> >> >
> >> > There is a
> >> >
> >> >     VM_BUG_ON_PAGE(!PageHWPoison(page) && page_count(page), page);
> >> >
> >> > inside uncharge_list() that verifies that there shouldn't in fact be
> >> > any pages ending writeback when they get into that function. Can you
> >> > build your kernel with CONFIG_DEBUG_VM to enable that test?
> >>
> >> I'll test this as well. ;)
> >
> > Thanks. I'm trying various udelays in between the NULL-check and the
> > dereference, but I cannot seem to make it happen with the ext4-dd on
> > my local machine.
> I am using a fairly fat udelay of 100.
> 
> I turned on CONFIG_DEBUG_VM but it didn't bug for me, although the original
> problem still reproduced.

Ok that sounds like a page refcounting bug then.

> I can insert some tracing to try to get some more information, but unfortunately
> I am stuck with v4.11 at the moment and can't try v4.13-rc4 on this setup.

Does the below trigger with your reproducer? (Still doesn't for me).

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3914e3dd6168..c052b085c5dd 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -599,6 +599,7 @@ static inline void __mod_lruvec_page_state(struct page *page,
 	struct mem_cgroup_per_node *pn;
 
 	__mod_node_page_state(page_pgdat(page), idx, val);
+	VM_BUG_ON_PAGE(!page_count(page), page);
 	if (mem_cgroup_disabled() || !page->mem_cgroup)
 		return;
 	__mod_memcg_state(page->mem_cgroup, idx, val);
@@ -612,6 +613,7 @@ static inline void mod_lruvec_page_state(struct page *page,
 	struct mem_cgroup_per_node *pn;
 
 	mod_node_page_state(page_pgdat(page), idx, val);
+	VM_BUG_ON_PAGE(!page_count(page), page);
 	if (mem_cgroup_disabled() || !page->mem_cgroup)
 		return;
 	mod_memcg_state(page->mem_cgroup, idx, val);

Thread overview: 14+ messages
2017-08-05 15:52 Jaegeuk Kim
     [not found] ` <20170808010150.4155-1-bradleybolen@gmail.com>
2017-08-08 16:21   ` Johannes Weiner
2017-08-08 16:56     ` Jaegeuk Kim
2017-08-08 17:37       ` Johannes Weiner
2017-08-08 19:13         ` Brad Bolen
2017-08-08 20:08           ` Johannes Weiner [this message]
2017-08-09  1:44             ` Jaegeuk Kim
2017-08-09  2:39               ` Brad Bolen
2017-08-09 18:38                 ` Johannes Weiner
2017-08-10 11:56                   ` Michal Hocko
2017-08-21 13:02                     ` Johannes Weiner
2017-08-21 13:23                       ` Michal Hocko
2017-08-11  1:30                   ` Brad Bolen
2017-08-11 21:37                   ` Jaegeuk Kim
