From: Michal Hocko <mhocko@kernel.org>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Baoquan He <bhe@redhat.com>, Oscar Salvador <osalvador@suse.de>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Subject: Re: [PATCH RFC 1/2] mm/memory_hotplug: no need to init new pgdat with node_start_pfn
Date: Wed, 22 Apr 2020 12:00:59 +0200	[thread overview]
Message-ID: <20200422100059.GD30312@dhcp22.suse.cz> (raw)
In-Reply-To: <47046122-ddf7-7a96-28f6-e8d57b356697@redhat.com>

On Wed 22-04-20 10:32:32, David Hildenbrand wrote:
> On 22.04.20 10:21, Michal Hocko wrote:
> > On Tue 21-04-20 15:06:20, David Hildenbrand wrote:
> >> On 21.04.20 14:52, Michal Hocko wrote:
> >>> On Tue 21-04-20 14:35:12, David Hildenbrand wrote:
> >>>> On 21.04.20 14:30, Michal Hocko wrote:
> >>>>> Sorry for the late reply
> >>>>>
> >>>>> On Thu 16-04-20 12:47:06, David Hildenbrand wrote:
> >>>>>> A hotadded node/pgdat will span no pages at all, until memory is moved to
> >>>>>> the zone/node via move_pfn_range_to_zone() -> resize_pgdat_range - e.g.,
> >>>>>> when onlining memory blocks. We don't have to initialize the
> >>>>>> node_start_pfn to the memory we are adding.
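
(For reference: the node span only ever grows when memory is onlined;
move_pfn_range_to_zone() resizes the pgdat roughly along the lines below.
This is a simplified sketch in the spirit of mm/memory_hotplug.c of that
era, not the verbatim upstream code.)

/* Simplified sketch, not verbatim: how onlining grows the node span. */
static void resize_pgdat_range(struct pglist_data *pgdat,
			       unsigned long start_pfn,
			       unsigned long nr_pages)
{
	unsigned long old_end_pfn = pgdat_end_pfn(pgdat);

	/* An empty node gets its start pfn here, not at add_memory() time. */
	if (!pgdat->node_spanned_pages || start_pfn < pgdat->node_start_pfn)
		pgdat->node_start_pfn = start_pfn;

	pgdat->node_spanned_pages = max(start_pfn + nr_pages, old_end_pfn) -
				    pgdat->node_start_pfn;
}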
> >>>>>
> >>>>> You are right that the node is empty at this phase but that is already
> >>>>> reflected by zero present pages (hmm, I do not see spanned pages being
> >>>>> set to 0, though). What I am missing here is why this is an improvement. The
> >>>>> new node is already visible here and I do not see why we hide the
> >>>>> information we already know.
> >>>>
> >>>> "information we already know" - no, not before we online the memory.
> >>>
> >>> Is this really the case? All add_memory_resource users operate on a
> >>> physical memory range.
> >>
> >> Having the first add_memory() magically set node_start_pfn of a hotplugged
> >> node isn't dangerous, I think we agree on that. It's just completely
> >> unnecessary here, and at least it left me confused why this is needed at
> >> all - because the node start/end pfn is only really touched when
> >> onlining/offlining memory (when resizing the zone and the pgdat).
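
(For context, the initialization being discussed is roughly the one below;
a simplified sketch of the hot-added pgdat setup path, not the verbatim
source, with the assignment this RFC removes marked. Exact names, ordering
and error handling may differ.)

/* Simplified sketch, not verbatim: pgdat setup for a hot-added node. */
static pg_data_t *hotadd_new_pgdat(int nid, u64 start)
{
	pg_data_t *pgdat = NODE_DATA(nid);
	unsigned long start_pfn = PFN_DOWN(start);

	/* ... allocate or reset the pgdat for a new/offline node ... */

	pgdat->node_id = nid;
	pgdat->node_start_pfn = start_pfn;	/* <-- dropped by this RFC; it
						 * then stays 0 until onlining */

	/* Zones start out empty: nothing is present or spanned yet. */
	free_area_init_core_hotplug(nid);

	return pgdat;
}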
> > 
> > I do not see any specific problem. It just feels odd to
> > ignore the start pfn when we have that information. I am a little bit
> > worried that this might come back to bite us. E.g., say we start using the
> > memmaps from the hotplugged memory; then the initial part of the node will
> > never get online and we would have memmaps outside of the node span. I do not
> 
> That's a general issue, which I pointed out in response to Oscar's last
> series. This needs more thought and rework, especially how
> node_start_pfn/node_spanned_pages are glued to memory onlining/offlining
> today.
> 
> > see an immediate problem except for the feeling this is odd.
> 
> I think it's inconsistent. E.g., start with a memory-less/cpu-less node
> and don't online memory from the kernel immediately.
> 
> Hotplug CPU. PGDAT initialized with node_start_pfn=0. Hotplug memory.
> -> node_start_pfn=0 until memory is actually onlined.
> 
> Hotplug memory. PGDAT initialized with node_start_pfn=$VALUE. Hotplug CPU.
> -> node_start_pfn=$VALUE
> 
> Hotplug memory. PGDAT initialized with node_start_pfn=$VALUE. Hotplug
> CPU. Hotunplug memory.
> -> node_start_pfn=$VALUE, although there is no memory anymore.
> 
> Hotplug memory 1. PGDAT initialized with node_start_pfn=$VALUE1. Hotplug
> memory 2. Hotunplug memory 1.
> -> node_start_pfn=$VALUE1 instead of $VALUE2.
> 
> 
> Again, because node_start_pfn has absolutely no meaning until memory is
> actually onlined - today.
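
(To illustrate that point: any consumer that honours the span never gets
to look at node_start_pfn of an empty node, so the value written at
add_memory() time is not observable before onlining. The helper below is
purely hypothetical, for illustration only, and not an existing kernel
function.)

/* Hypothetical helper, for illustration only (not an existing kernel API). */
static bool pfn_in_node_span(pg_data_t *pgdat, unsigned long pfn)
{
	/* Empty node: whatever node_start_pfn holds is never consulted. */
	if (!pgdat->node_spanned_pages)
		return false;

	return pfn >= pgdat->node_start_pfn &&
	       pfn < pgdat->node_start_pfn + pgdat->node_spanned_pages;
}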
> 
> > 
> > That being said I will shut up now and leave it alone.
> 
> Is that a nack?

No, it's not. Nor am I going to ack this, but I will not stand in the
way. I would just urge you to put as many of the assumptions you are making,
and as much information, into the changelog as possible.

-- 
Michal Hocko
SUSE Labs



Thread overview: 14+ messages
2020-04-16 10:47 [PATCH RFC 0/2] mm/memory_hotplug: handle memblocks only with CONFIG_ARCH_KEEP_MEMBLOCK David Hildenbrand
2020-04-16 10:47 ` [PATCH RFC 1/2] mm/memory_hotplug: no need to init new pgdat with node_start_pfn David Hildenbrand
2020-04-16 14:11   ` Pankaj Gupta
2020-04-21 12:30   ` Michal Hocko
2020-04-21 12:35     ` David Hildenbrand
2020-04-21 12:52       ` Michal Hocko
2020-04-21 13:06         ` David Hildenbrand
2020-04-22  8:21           ` Michal Hocko
2020-04-22  8:32             ` David Hildenbrand
2020-04-22 10:00               ` Michal Hocko [this message]
2020-04-16 10:47 ` [PATCH RFC 2/2] mm/memory_hotplug: handle memblocks only with CONFIG_ARCH_KEEP_MEMBLOCK David Hildenbrand
2020-04-16 17:09   ` Mike Rapoport
2020-04-21 12:39   ` Michal Hocko
2020-04-21 12:41     ` David Hildenbrand
