Date: Mon, 22 Jun 2020 17:22:21 +0800
From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: David Hildenbrand
Cc: Wei Yang, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Michal Hocko,
	stable@vger.kernel.org, Andrew Morton, Johannes Weiner, Minchan Kim,
	Huang Ying, Wei Yang, Mel Gorman
Subject: Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
Message-ID: <20200622092221.GA96699@L-31X9LVDL-1304.local>
References: <20200619125923.22602-1-david@redhat.com>
	<20200619125923.22602-2-david@redhat.com>
	<20200622082635.GA93552@L-31X9LVDL-1304.local>
	<2185539f-b210-5d3f-5da2-a497b354eebb@redhat.com>
In-Reply-To: <2185539f-b210-5d3f-5da2-a497b354eebb@redhat.com>

On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>On 22.06.20 10:26, Wei Yang wrote:
>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>> Especially with memory hotplug, we can have offline sections (with a
>>> garbage memmap) and overlapping zones. We have to make sure to only
>>> touch initialized memmaps (online sections managed by the buddy) and
>>> that the zone matches, to not move pages between zones.
>>>
>>> To test if this can actually happen, I added a simple
>>>   BUG_ON(page_zone(page_i) != page_zone(page_j));
>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM
>>> and onlining the first memory block "online_movable" and the second
>>> memory block "online_kernel", it will trigger the BUG, as both zones
>>> (NORMAL and MOVABLE) overlap.
>>>
>>> This might result in all kinds of weird situations (e.g., double
>>> allocations, list corruptions, unmovable allocations ending up in the
>>> movable zone).
>>>
>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>> Acked-by: Michal Hocko
>>> Cc: stable@vger.kernel.org # v5.2+
>>> Cc: Andrew Morton
>>> Cc: Johannes Weiner
>>> Cc: Michal Hocko
>>> Cc: Minchan Kim
>>> Cc: Huang Ying
>>> Cc: Wei Yang
>>> Cc: Mel Gorman
>>> Signed-off-by: David Hildenbrand
>>> ---
>>>  mm/shuffle.c | 18 +++++++++---------
>>>  1 file changed, 9 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>> index 44406d9977c77..dd13ab851b3ee 100644
>>> --- a/mm/shuffle.c
>>> +++ b/mm/shuffle.c
>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>   * For two pages to be swapped in the shuffle, they must be free (on a
>>>   * 'free_area' lru), have the same order, and have the same migratetype.
>>>   */
>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>> +		unsigned long pfn, int order)
>>>  {
>>> -	struct page *page;
>>> +	struct page *page = pfn_to_online_page(pfn);
>>
>> Hi, David and Dan,
>>
>> One thing I want to confirm here is we won't have a partially online
>> section, right? We can add a sub-section to the system, but we won't
>> manage it by the buddy.
>
>Hi,
>
>there is still a BUG with sub-section hot-add (devmem), which broke
>pfn_to_online_page() in corner cases (especially, see the description in
>include/linux/mmzone.h). We can have a boot-memory section partially
>populated and marked online. Then, we can hot-add devmem, marking the
>remaining pfns valid - and as the section is marked online, also as online.

Oh, yes, I see this description. This means we could have a section marked
as online with a sub-section that was never added. The good news is that
even if the sub-section was not added, its memmap is still populated for an
early section, so the page returned from pfn_to_online_page() is a valid one.
But what would happen if the sub-section is removed after being added?
Would section_deactivate() release the memmap backing those "struct page"s?

>
>This is, however, a different problem to solve and affects most other
>pfn walkers as well. The "if (page_zone(page) != zone)" check guards us
>from most harm, as the devmem zone won't match.
>

Yes, a different problem that just came to mind. Hopefully it won't affect
this patch.

>Thanks!
>
>--
>Thanks,
>
>David / dhildenb

--
Wei Yang
Help you, Help me