Date: Thu, 26 Nov 2020 11:36:02 +0200
From: Mike Rapoport
To: Andrea Arcangeli
Cc: David Hildenbrand, Vlastimil Babka, Mel Gorman, Andrew Morton,
    linux-mm@kvack.org, Qian Cai, Michal Hocko, linux-kernel@vger.kernel.org,
    Baoquan He
Subject: Re: [PATCH 1/1] mm: compaction: avoid fast_isolate_around() to set pageblock_skip on reserved pages
Message-ID: <20201126093602.GQ123287@linux.ibm.com>
References: <35F8AADA-6CAA-4BD6-A4CF-6F29B3F402A4@redhat.com>
 <20201125210414.GO123287@linux.ibm.com>

On Wed, Nov 25, 2020 at 04:38:16PM -0500, Andrea Arcangeli wrote:
> On Wed, Nov 25, 2020 at 11:04:14PM +0200, Mike Rapoport wrote:
> > I think the very root cause is how e820__memblock_setup() registers
> > memory with memblock:
> >
> > 	if (entry->type == E820_TYPE_SOFT_RESERVED)
> > 		memblock_reserve(entry->addr, entry->size);
> >
> > 	if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN)
> > 		continue;
> >
> > 	memblock_add(entry->addr, entry->size);
> >
> > From that point the system has an inconsistent view of RAM in both
> > memblock.memory and memblock.reserved, which is then translated to the
> > memmap etc.
> >
> > Unfortunately, simply adding all RAM to memblock is not possible as
> > there are systems where "the addresses listed in the reserved range
> > must never be accessed, or (as we discovered) even be reachable by an
> > active page table entry" [1].
> >
> > [1] https://lore.kernel.org/lkml/20200528151510.GA6154@raspberrypi/
>
> It looks like what's missing is a memblock_reserve which I don't think
> would interfere at all with the issue above since it won't create a
> direct mapping and it'll simply invoke the second stage that wasn't
> invoked here.
>
> I guess this would have a better chance to have the second
> initialization stage run in reserve_bootmem_region and it would likely
> solve the problem without breaking E820_TYPE_RESERVED which is known
> by the kernel:
>
> > 	if (entry->type == E820_TYPE_SOFT_RESERVED)
> > 		memblock_reserve(entry->addr, entry->size);
> >
> +	if (entry->type == 20)
> +		memblock_reserve(entry->addr, entry->size);
>
> > 	if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN)
> > 		continue;
>
> This is however just to show the problem, I didn't check what type 20
> is.

I think it's invented by your BIOS vendor :)

> To me it doesn't look like the root cause though, the root cause is
> that if you don't call memblock_reserve the page->flags remains
> uninitialized.

I didn't mean that the root cause is that we don't call
memblock_reserve(). I meant that the root cause is the inconsistency in
the memory representation.

On most architectures, memblock.memory represents the entire RAM in the
system and memblock.reserved represents memory regions that were
reserved either by the firmware or by the kernel during early boot.

On x86 the memory that the firmware reserved for its own use is never
considered memory, and some of the reserved memory types are never
registered with memblock at all.

As the memblock data is used to initialize the memory map, we end up
with some page structs not being properly initialized.
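To make the "inconsistent view" a bit more concrete, here is a
simplified sketch of what a fully consistent registration in
e820__memblock_setup() could look like. This is only an illustration,
not a patch I'm proposing, and it deliberately ignores the systems from
[1] where the reserved ranges must not even be mapped:

	int i;

	for (i = 0; i < e820_table->nr_entries; i++) {
		struct e820_entry *entry = &e820_table->entries[i];

		/* every range the firmware reported becomes "memory" ... */
		memblock_add(entry->addr, entry->size);

		/* ... and everything that is not plain RAM is marked busy */
		if (entry->type != E820_TYPE_RAM &&
		    entry->type != E820_TYPE_RESERVED_KERN)
			memblock_reserve(entry->addr, entry->size);
	}

With something like that, every page struct should be initialized
either on its way to the free lists or by reserve_bootmem_region(),
regardless of the exact e820 type.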
> I think that page_alloc.c needs to be more robust and detect at least
> if holes within zones (but ideally all pfn_valid struct pages in the
> system, even beyond the end of the zone) aren't being initialized in
> the second stage, without relying on the arch code to remember to call
> memblock_reserve.

I agree that page_alloc.c needs to be more robust, but it anyway needs
to rely on some data supplied by the arch to know where valid memory
is. With SPARSEMEM, pfn_valid() only says where the memmap exists; it
does not necessarily mean there is an actual page frame behind a valid
pfn.

> In fact it's not clear why memblock_reserve even exists, that
> information can be calculated reliably by page_alloc as a function of
> memblock.memory alone, by walking all nodes and all zones. It doesn't
> even seem to help in destroying the direct mapping;
> reserve_bootmem_region just initializes the struct pages, so it
> doesn't need a special memblock.reserved to find those ranges.

memblock_reserve() is there to allow architectures to mark memory
regions as busy so that this memory won't be handed to the buddy
allocator as free pages. It could be memory that the firmware reported
as reserved, memory occupied by the kernel image and initrd, or the
early memory allocations the kernel does before the page allocator is
up.
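For example, roughly (the exact symbols, the initrd handling and the
allocation here are only illustrative and differ between
architectures):

	/* don't give the kernel image to the buddy allocator */
	memblock_reserve(__pa_symbol(_text),
			 __pa_symbol(_end) - __pa_symbol(_text));

	/* same for the initrd the bootloader passed us */
	memblock_reserve(phys_initrd_start, phys_initrd_size);

	/*
	 * Early allocations end up in memblock.reserved as well;
	 * memblock_alloc() picks a free range from memblock.memory and
	 * marks it reserved in one go.
	 */
	ptr = memblock_alloc(PAGE_SIZE, PAGE_SIZE);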
> In fact it's scary that code then does stuff like this, trusting that
> memblock.reserved is nearly complete information (which it obviously
> isn't, given type 20 doesn't get queued and I got that type 20 in all
> my systems):
>
> 	for_each_reserved_mem_region(i, &start, &end) {
> 		if (addr >= start && addr_end <= end)
> 			return true;
> 	}
>
> That code in irq-gic-v3-its.c should stop using
> for_each_reserved_mem_region and instead check
> pfn_valid(addr >> PAGE_SHIFT) and, if so,
> PageReserved(pfn_to_page(addr >> PAGE_SHIFT)).

I think that for coldplugged CPUs this code runs before the memmap is
set up, so pfn_valid() and PageReserved() are not yet available at that
point.

> At best memory.reserved should be calculated automatically by
> page_alloc.c based on the zone_start_pfn/zone_end_pfn and not passed
> by the e820 caller; instead of adding the memblock_reserve call for
> type 20 we should delete the memblock_reserve function.

memory.reserved cannot be calculated automatically. It represents all
the memory allocations made before the page allocator is up. And since
memblock_reserve() is the most basic way to allocate memory early at
boot, we cannot really delete it ;-)

As for e820 and type 20, unless it is in memblock, page_alloc.c has no
way to properly initialize the memmap for it. It can continue to guess,
like it does with init_unavailable_memory().

> Thanks,
> Andrea

--
Sincerely yours,
Mike.