From: Mike Rapoport <rppt@linux.ibm.com>
To: Oscar Salvador <osalvador@suse.de>
Cc: akpm@linux-foundation.org, mhocko@suse.com,
dan.j.williams@intel.com, pavel.tatashin@microsoft.com,
jglisse@redhat.com, Jonathan.Cameron@huawei.com,
rafael@kernel.org, david@redhat.com, linux-mm@kvack.org,
Oscar Salvador <osalvador@suse.com>
Subject: Re: [PATCH v2 3/5] mm, memory_hotplug: Move zone/pages handling to offline stage
Date: Wed, 28 Nov 2018 09:52:38 +0200
Message-ID: <20181128075238.GD14414@rapoport-lnx>
In-Reply-To: <20181127162005.15833-4-osalvador@suse.de>
On Tue, Nov 27, 2018 at 05:20:03PM +0100, Oscar Salvador wrote:
> From: Oscar Salvador <osalvador@suse.com>
>
> The current implementation accesses pages during the hot-remove
> stage in order to get the zone linked to this memory range.
> We use that zone to a) check whether the zone is ZONE_DEVICE and
> b) shrink the zone's spanned pages.
>
> Accessing pages during this stage is problematic: if the memory was
> never onlined before being removed, we might be accessing pages
> that were never initialized.
>
> The only reason to check for ZONE_DEVICE in __remove_pages
> is to bypass the call to release_mem_region_adjustable(),
> since these regions are removed with devm_release_mem_region.
>
> With patch#2, this is no longer a problem, so we can safely
> call release_mem_region_adjustable().
> release_mem_region_adjustable() will spot that the region
> we are trying to remove was acquired by means of
> devm_request_mem_region, and will back off safely.
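
Just to illustrate the idea for anyone following the thread (this is
not the actual hunk from patch#2, only a sketch based on its subject
line, "Check for IORESOURCE_SYSRAM in release_mem_region_adjustable"):

	/* inside release_mem_region_adjustable() */
	if (!(res->flags & IORESOURCE_SYSRAM)) {
		/*
		 * Hotplugged System RAM carries IORESOURCE_SYSRAM.
		 * A region without it was acquired via
		 * devm_request_mem_region() (devm/HMM) and devm will
		 * release it itself, so just back off here.
		 */
		break;
	}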
>
> This allows us to remove all zone-related operations from the
> hot-remove stage.
> 
> Because of this, the zone's spanned pages are now shrunk during
> the offlining stage, in shrink_zone_pgdat().
> It would have been great to also decrease the node's spanned pages
> there, but we still need them in try_offline_node().
> So we keep decreasing the node's spanned pages in the hot-remove
> stage.
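
For orientation, the resulting division of labor, using only the names
mentioned above (simplified; the real call chains are longer):

	offline stage:    offline_pages()
	                    -> shrink_zone_pgdat()  /* zone span shrunk */

	hot-remove stage: __remove_pages()          /* node span shrunk;
	                                               try_offline_node()
	                                               still needs it */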
>
> The only particularity is that find_smallest_section_pfn and
> find_biggest_section_pfn, when called from shrink_zone_span, will
> now check for online sections instead of valid sections.
> To make this work with the devm/HMM code, we need to call
> offline_mem_sections and online_mem_sections in that code path
> when we are adding memory.
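
Roughly, the change in those helpers would look like this (a sketch,
not the exact hunk):

	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION) {
		ms = __pfn_to_section(start_pfn);

-		if (unlikely(!valid_section(ms)))
+		if (unlikely(!online_section(ms)))
			continue;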
>
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> ---
>  arch/powerpc/mm/mem.c          | 11 +----
>  arch/sh/mm/init.c              |  4 +-
>  arch/x86/mm/init_32.c          |  3 +-
>  arch/x86/mm/init_64.c          |  8 +---
>  include/linux/memory_hotplug.h |  8 ++--
>  kernel/memremap.c              | 14 +++++--
>  mm/memory_hotplug.c            | 95 ++++++++++++++++++++++++------------------
>  mm/sparse.c                    |  4 +-
>  8 files changed, 76 insertions(+), 71 deletions(-)
[ ... ]

> @@ -547,35 +566,28 @@ static int __remove_section(struct zone *zone, struct mem_section *ms,
>  /**
> - * __remove_pages() - remove sections of pages from a zone
> - * @zone: zone from which pages need to be removed
> + * __remove_pages() - remove sections of pages from a nid
> + * @nid: nid from which pages belong to

Nit: the description sounds a bit awkward.
Why not keep the original one with s/zone/node/?
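
Something along these lines, keeping the new @nid parameter but with
the original phrasing:

	 * __remove_pages() - remove sections of pages from a node
	 * @nid: node from which pages need to be removed
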
>   * @phys_start_pfn: starting pageframe (must be aligned to start of a section)
>   * @nr_pages: number of pages to remove (must be multiple of section size)
>   * @altmap: alternative device page map or %NULL if default memmap is used
>   *
>   * Generic helper function to remove section mappings and sysfs entries
>   * for the section of the memory we are removing. Caller needs to make
>   * sure that pages are marked reserved and zones are adjusted properly by
>   * calling offline_pages().
>   */
--
Sincerely yours,
Mike.