From: Michal Hocko <mhocko@suse.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>,
Pavel Tatashin <pasha.tatashin@soleen.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
osalvador@suse.de, richard.weiyang@gmail.com, vbabka@suse.cz,
rientjes@google.com
Subject: Re: [PATCH v2] mm/memory_hotplug: drain per-cpu pages again during memory offline
Date: Fri, 4 Sep 2020 14:42:19 +0200
Message-ID: <20200904124219.GB4610@dhcp22.suse.cz>
In-Reply-To: <20200903123136.1fa50e773eb58c6200801e65@linux-foundation.org>
On Thu 03-09-20 12:31:36, Andrew Morton wrote:
> On Thu, 3 Sep 2020 19:36:26 +0200 David Hildenbrand <david@redhat.com> wrote:
>
> > (still on vacation, back next week on Tuesday)
> >
> > I didn't look into the discussions in v1, but to me this looks like we are
> > trying to hide an actual bug behind hacks in the caller (repeated
> > calls to drain_all_pages()). What about alloc_contig_range() users?
> > They get more allocation errors just because the PCP code doesn't
> > play along.
> >
> > There *is* strong synchronization with the page allocator - however,
> > there seems to be one corner case race where we allow to allocate pages
> > from isolated pageblocks.
> >
> > I want that fixed instead if possible, otherwise this is just an ugly
> > hack to make the obvious symptoms (offlining looping forever) disappear.
> >
> > If that is not possible easily, I'd much rather see all
> > drain_all_pages() calls moved to the caller and the expected
> > behavior documented, instead of specifying "there is no strong
> > synchronization with the page allocator" - which is wrong in all but
> > the PCP case (and there only in one possible race?).
> >
>
> It's a two-line hack which fixes a bug in -stable kernels, so I'm
> inclined to proceed with it anyway. We can undo it later on as part of
> a better fix, OK?
Agreed. http://lkml.kernel.org/r/20200904070235.GA15277@dhcp22.suse.cz
for reference.
--
Michal Hocko
SUSE Labs
Thread overview: 6+ messages
2020-09-03 14:00 [PATCH v2] mm/memory_hotplug: drain per-cpu pages again during memory offline Pavel Tatashin
2020-09-03 17:36 ` David Hildenbrand
2020-09-03 18:07 ` Pavel Tatashin
2020-09-03 19:31 ` Andrew Morton
2020-09-03 19:35 ` David Hildenbrand
2020-09-04 12:42 ` Michal Hocko [this message]