From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
To: Xiongfeng Wang <wangxiongfeng2@huawei.com>, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	frederic@kernel.org, tglx@linutronix.de, mtosatti@redhat.com,
	mgorman@suse.de, linux-rt-users@vger.kernel.org, vbabka@suse.cz,
	cl@linux.com, paulmck@kernel.org, willy@infradead.org
Subject: Re: [PATCH 0/2] mm/page_alloc: Remote per-cpu lists drain support
Date: Wed, 09 Feb 2022 12:36:02 +0100	[thread overview]
Message-ID: <035d9c7a21eb024e336dce0942fa3f85b864aaea.camel@redhat.com> (raw)
In-Reply-To: <b2e7ea31-0a56-6415-474b-a952fb1d36ef@huawei.com>

On Wed, 2022-02-09 at 19:26 +0800, Xiongfeng Wang wrote:
> Hi,
> 
> On 2022/2/9 17:45, Nicolas Saenz Julienne wrote:
> > Hi Xiongfeng, thanks for taking the time to look at this.
> > 
> > On Wed, 2022-02-09 at 16:55 +0800, Xiongfeng Wang wrote:
> > > Hi Nicolas,
> > > 
> > > When I applied the patchset on top of the following commit and tested it on
> > > QEMU, I came across the following call trace.
> > >   commit dd81e1c7d5fb126e5fbc5c9e334d7b3ec29a16a0
> > > 
> > > I wrote a userspace application to consume memory. When memory is exhausted,
> > > the OOM killer is triggered and the following call trace is printed. I am not
> > > sure whether it is related to this patchset, but when I reverted it, the
> > > 'NULL pointer' call trace no longer appeared.
> > 
> > It's a silly mistake on my part: while cleaning up the code I messed up one
> > of the 'struct per_cpu_pages' accessors. This should fix it:
> > 
> > ------------------------->8-------------------------
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 0caa7155ca34..e65b991c3dc8 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3279,7 +3279,7 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
> >                                 has_pcps = true;
> >                 } else {
> >                         for_each_populated_zone(z) {
> > -                               pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> > +                               pcp = per_cpu_ptr(z->per_cpu_pageset, cpu);
> >                                 lp = rcu_dereference_protected(pcp->lp,
> >                                                 mutex_is_locked(&pcpu_drain_mutex));
> >                                 if (lp->count) {
> 
> I have tested it. It works well. No more 'NULL pointer' call trace.

Thanks!
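
For anyone hitting the same thing: the crash comes from the zone == NULL case.
drain_all_pages(NULL), which the allocation/reclaim slowpath issues to drain
every populated zone, reaches the else branch in the hunk above, so the body
must dereference the loop variable 'z'; the cleanup accidentally dereferenced
the NULL 'zone' parameter instead. Below is a minimal userspace sketch of that
pattern only; the types, the zones[] array and the drain_all() helper are
simplified stand-ins for illustration, not the real kernel definitions:

#include <stdio.h>

/* Simplified stand-ins for the kernel structures, for illustration only. */
struct per_cpu_pages { int count; };
struct zone {
	const char *name;
	struct per_cpu_pages pcp;
};

static struct zone zones[] = {
	{ "DMA",    { .count = 0 } },
	{ "Normal", { .count = 3 } },
};

#define for_each_populated_zone(z) \
	for ((z) = zones; (z) < zones + sizeof(zones) / sizeof(zones[0]); (z)++)

/*
 * Mirrors the shape of __drain_all_pages(): zone == NULL means "all zones",
 * so the body must use the loop variable 'z', never the 'zone' parameter.
 */
static void drain_all(struct zone *zone)
{
	struct zone *z;

	if (zone) {
		printf("%s: count=%d\n", zone->name, zone->pcp.count);
	} else {
		for_each_populated_zone(z) {
			/* Writing zone->pcp here is the bug: NULL dereference. */
			printf("%s: count=%d\n", z->name, z->pcp.count);
		}
	}
}

int main(void)
{
	drain_all(NULL);	/* "drain everything", as the reclaim path does */
	return 0;
}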

-- 
Nicolás Sáenz



Thread overview: 25+ messages
2022-02-08 10:07 [PATCH 0/2] mm/page_alloc: Remote per-cpu lists drain support Nicolas Saenz Julienne
2022-02-08 10:07 ` [PATCH 1/2] mm/page_alloc: Access lists in 'struct per_cpu_pages' indirectly Nicolas Saenz Julienne
2022-03-03 14:33   ` Marcelo Tosatti
2022-02-08 10:07 ` [PATCH 2/2] mm/page_alloc: Add remote draining support to per-cpu lists Nicolas Saenz Julienne
2022-02-08 15:47   ` Marcelo Tosatti
2022-02-15  8:47     ` Nicolas Saenz Julienne
2022-02-15 17:32       ` Paul E. McKenney
2022-02-09  8:55 ` [PATCH 0/2] mm/page_alloc: Remote per-cpu lists drain support Xiongfeng Wang
2022-02-09  9:45   ` Nicolas Saenz Julienne
2022-02-09 11:26     ` Xiongfeng Wang
2022-02-09 11:36       ` Nicolas Saenz Julienne [this message]
2022-02-10 10:59 ` Xiongfeng Wang
2022-02-10 11:04   ` Nicolas Saenz Julienne
2022-03-03 11:45 ` Mel Gorman
2022-03-07 13:57   ` Nicolas Saenz Julienne
2022-03-10 16:31     ` Mel Gorman
2022-03-07 20:47   ` Marcelo Tosatti
2022-03-24 18:59   ` Nicolas Saenz Julienne
2022-03-25 10:48     ` Mel Gorman
2022-03-28 13:51       ` Nicolas Saenz Julienne
2022-03-29  9:45         ` Mel Gorman
2022-03-30 11:29   ` Nicolas Saenz Julienne
2022-03-31 15:24     ` Mel Gorman
2022-03-03 13:27 ` Vlastimil Babka
2022-03-03 14:10   ` Nicolas Saenz Julienne
