From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [RFC PATCH] mm/hotplug: enable memory hotplug for non-lru movable pages
From: Yisheng Xie
Date: Wed, 25 Jan 2017 14:53:44 +0800
Message-ID: <5cfc0c2d-7f45-67c5-53d9-683d5e243f84@huawei.com>
In-Reply-To: <1485314714-38251-1-git-send-email-xieyisheng1@huawei.com>
References: <1485314714-38251-1-git-send-email-xieyisheng1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi, sorry to disturb. I will send another version with a minor change
to the page_lock checking in scan_movable_pages().

On 2017/1/25 11:25, Yisheng Xie wrote:
> We had considered all of the non-lru pages as unmovable before
> commit bda807d44454 ("mm: migrate: support non-lru movable page
> migration"). But now some non-lru pages, such as zsmalloc and
> virtio-balloon pages, have also become movable, so we can offline
> such blocks by using non-lru page migration.
> 
> This patch straightforwardly adds non-lru migration code: it adds
> non-lru-related code to the functions that scan over pfns, collect
> the pages to be migrated, and isolate them before migration.
> 
> Signed-off-by: Yisheng Xie
> ---
>  mm/memory_hotplug.c | 32 +++++++++++++++++++++-----------
>  mm/page_alloc.c     |  8 ++++++--
>  2 files changed, 27 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index e43142c1..fbdbffc 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1510,15 +1510,16 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
>  }
> 
>  /*
> - * Scan pfn range [start,end) to find movable/migratable pages (LRU pages
> - * and hugepages). We scan pfn because it's much easier than scanning over
> - * linked list. This function returns the pfn of the first found movable
> - * page if it's found, otherwise 0.
> + * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
> + * non-lru movable pages and hugepages). We scan pfn because it's much
> + * easier than scanning over linked list. This function returns the pfn
> + * of the first found movable page if it's found, otherwise 0.
>   */
>  static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>  {
>  	unsigned long pfn;
>  	struct page *page;
> +	bool movable;
>  	for (pfn = start; pfn < end; pfn++) {
>  		if (pfn_valid(pfn)) {
>  			page = pfn_to_page(pfn);
> @@ -1531,6 +1532,11 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>  				pfn = round_up(pfn + 1,
>  					1 << compound_order(page)) - 1;
>  			}
> +			lock_page(page);
> +			movable = __PageMovable(page);
> +			unlock_page(page);
> +			if (movable)
> +				return pfn;
>  		}
>  	}
>  	return 0;
> @@ -1600,21 +1606,25 @@ static struct page *new_node_page(struct page *page, unsigned long private,
>  		if (!get_page_unless_zero(page))
>  			continue;
>  		/*
> -		 * We can skip free pages. And we can only deal with pages on
> -		 * LRU.
> +		 * We can skip free pages. And we can deal with pages on
> +		 * LRU and non-lru movable pages.
>  		 */
> -		ret = isolate_lru_page(page);
> +		if (PageLRU(page))
> +			ret = isolate_lru_page(page);
> +		else
> +			ret = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>  		if (!ret) { /* Success */
>  			put_page(page);
>  			list_add_tail(&page->lru, &source);
>  			move_pages--;
> -			inc_node_page_state(page, NR_ISOLATED_ANON +
> -					    page_is_file_cache(page));
> +			if (!__PageMovable(page))
> +				inc_node_page_state(page, NR_ISOLATED_ANON +
> +						page_is_file_cache(page));
> 
>  		} else {
>  #ifdef CONFIG_DEBUG_VM
> -			pr_alert("removing pfn %lx from LRU failed\n", pfn);
> -			dump_page(page, "failed to remove from LRU");
> +			pr_alert("failed to isolate pfn %lx\n", pfn);
> +			dump_page(page, "isolation failed");
>  #endif
>  			put_page(page);
>  			/* Because we don't have big zone->lock. we should
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d604d25..52d3067 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7055,8 +7055,9 @@ void *__init alloc_large_system_hash(const char *tablename,
>   * If @count is not zero, it is okay to include less @count unmovable pages
>   *
>   * PageLRU check without isolation or lru_lock could race so that
> - * MIGRATE_MOVABLE block might include unmovable pages. It means you can't
> - * expect this function should be exact.
> + * MIGRATE_MOVABLE block might include unmovable pages. And __PageMovable
> + * check without lock_page also may miss some movable non-lru pages at
> + * race condition. So you can't expect this function should be exact.
>   */
>  bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  			 bool skip_hwpoisoned_pages)
> @@ -7112,6 +7113,9 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  		if (skip_hwpoisoned_pages && PageHWPoison(page))
>  			continue;
> 
> +		if (__PageMovable(page))
> +			continue;
> +
>  		if (!PageLRU(page))
>  			found++;
>  		/*
> 