From: Michal Hocko <mhocko@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-mm@kvack.org>, LKML <linux-kernel@vger.kernel.org>,
	Michal Hocko <mhocko@suse.com>, David Hildenbrand <david@redhat.com>,
	Oscar Salvador <osalvador@suse.de>,
	Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 1/3] mm, memory_hotplug: try to migrate full pfn range
Date: Tue, 11 Dec 2018 15:27:39 +0100
Message-ID: <20181211142741.2607-2-mhocko@kernel.org>
In-Reply-To: <20181211142741.2607-1-mhocko@kernel.org>

From: Michal Hocko <mhocko@suse.com>

do_migrate_range has been limiting the number of pages to migrate to 256
for some reason which is not documented. Even if the limit made some
sense back when it was introduced, it doesn't really serve a good
purpose these days. If the range contains huge pages then we break out
of the loop too early and go through LRU and pcp cache draining and
scan_movable_pages, which is quite suboptimal.

The only reason to limit the number of pages I can think of is to reduce
the potential time to react to a fatal signal. But even then the number
of pages is a questionable metric, because even a single page migration
might block in a non-killable state (e.g. __unmap_and_move).

Remove the limit and offline the full requested range (this is one
memblock worth of pages with the current code). Should we ever get a
report that offlining takes too long to react to a fatal signal, then
we should rather fix the core migration to use killable waits and bail
out on a signal.

Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/memory_hotplug.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c82193db4be6..6263c8cd4491 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1339,18 +1339,16 @@ static struct page *new_node_page(struct page *page, unsigned long private)
 	return new_page_nodemask(page, nid, &nmask);
 }
 
-#define NR_OFFLINE_AT_ONCE_PAGES	(256)
 static int
 do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
 	struct page *page;
-	int move_pages = NR_OFFLINE_AT_ONCE_PAGES;
 	int not_managed = 0;
 	int ret = 0;
 	LIST_HEAD(source);
 
-	for (pfn = start_pfn; pfn < end_pfn && move_pages > 0; pfn++) {
+	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 		if (!pfn_valid(pfn))
 			continue;
 		page = pfn_to_page(pfn);
@@ -1362,8 +1360,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 				ret = -EBUSY;
 				break;
 			}
-			if (isolate_huge_page(page, &source))
-				move_pages -= 1 << compound_order(head);
+			isolate_huge_page(page, &source);
 			continue;
 		} else if (PageTransHuge(page))
 			pfn = page_to_pfn(compound_head(page))
@@ -1382,7 +1379,6 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		if (!ret) { /* Success */
 			put_page(page);
 			list_add_tail(&page->lru, &source);
-			move_pages--;
 			if (!__PageMovable(page))
 				inc_node_page_state(page, NR_ISOLATED_ANON +
 						    page_is_file_cache(page));
-- 
2.19.2
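
[Editorial illustration of the changelog's suggested follow-up: if signal
latency ever becomes a problem, the fix belongs in core migration rather
than in a page cap. Below is a minimal sketch of what an explicit
fatal-signal check in the pfn walk could look like. The function name
do_migrate_range_killable is hypothetical and not an existing kernel
symbol; only fatal_signal_pending() and pfn_valid() are real kernel
helpers.]

#include <linux/mm.h>
#include <linux/sched/signal.h>

/*
 * Hypothetical sketch, NOT part of this patch: react to a fatal signal
 * inside the walk itself instead of capping the number of pages per
 * call, so reaction time no longer depends on an arbitrary page count.
 */
static int do_migrate_range_killable(unsigned long start_pfn,
				     unsigned long end_pfn)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		/* Bail out as soon as the caller is fatally signalled. */
		if (fatal_signal_pending(current))
			return -EINTR;

		if (!pfn_valid(pfn))
			continue;

		/* ... isolate and queue the page as do_migrate_range() does ... */
	}
	return 0;
}

[Whether -EINTR is the right return value for the offlining path is a
separate question; the point is only that a signal check scales with
elapsed time rather than with a fixed page count.]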