Date: Sun, 14 Feb 2021 20:00:16 +0200
From: Mike Rapoport
To: Michal Hocko
Cc: David Hildenbrand, Andrew Morton, Andrea Arcangeli, Baoquan He,
	Borislav Petkov, Chris Wilson, "H. Peter Anvin", Ingo Molnar,
	Linus Torvalds, Łukasz Majczak, Mel Gorman, Mike Rapoport,
	Qian Cai, "Sarvela, Tomi P", Thomas Gleixner, Vlastimil Babka,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	stable@vger.kernel.org, x86@kernel.org
Subject: Re: [PATCH v5 1/1] mm: refactor initialization of struct page for
	holes in memory layout
Message-ID: <20210214180016.GO242749@kernel.org>
References: <20210208110820.6269-1-rppt@kernel.org>

On Fri, Feb 12, 2021 at 02:18:20PM +0100, Michal Hocko wrote:
> On Fri 12-02-21 11:42:15, David Hildenbrand wrote:
> > On 12.02.21 11:33, Michal Hocko wrote:
> [...]
> > > I have to digest this but my first impression is that this is more
> > > heavyweight than it needs to be. Pfn walkers should normally obey
> > > node range at least. The first pfn is usually excluded but I haven't
> > > seen real
> > 
> > We've seen examples where this is not sufficient. Simple example:
> > 
> > Have your physical memory end within a memory section. Easy via QEMU,
> > just do a "-m 4000M". The remaining part of the last section has
> > fake/wrong node/zone info.
> 
> Does this really matter, though? If those pages are reserved then
> nobody will touch them regardless of their node/zone ids.
> 
> > Hotplug memory. The node/zone gets resized such that PFN walkers
> > might stumble over it.
> > 
> > The basic idea is to make sure that any initialized/"online" pfn
> > belongs to exactly one node/zone and that the node/zone spans that
> > PFN.
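
To make that concrete: the invariant above is essentially what the
VM_BUG_ON in set_pfnblock_flags_mask() asserts. Roughly, paraphrasing
include/linux/mmzone.h and mm/page_alloc.c rather than quoting them
verbatim:

	/* Does the zone's [zone_start_pfn, zone_end_pfn) range cover pfn? */
	static inline bool zone_spans_pfn(const struct zone *zone,
					  unsigned long pfn)
	{
		return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
	}

	/*
	 * In set_pfnblock_flags_mask(): fires when the zone link recorded
	 * in struct page disagrees with the span of that zone, i.e. exactly
	 * the stale node/zone info described above.
	 */
	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);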
> Yeah, this sounds like a good idea but what is the proper node for a
> hole between two ranges associated with different nodes/zones? This
> will always be a random number. We should have a clear way to tell "do
> not touch those pages" and PageReserved sounds like a good way to tell
> that.

Nobody should touch reserved pages, but I don't think we can ensure
that.

We can correctly set the zone links for the reserved pages in holes in
the middle of a zone based on the architecture constraints. Only the
holes at the beginning/end of memory will not be spanned by any
node/zone, which in practice does not seem to be a problem, as the
VM_BUG_ON in set_pfnblock_flags_mask() has never triggered on pfn 0.

I believe that any improvement in memory map consistency is a step
forward.

> > > problems with that. The VM_BUG_ON blowing up is really bad but as
> > > said above we can simply make it less offensive in presence of
> > > reserved pages as those shouldn't reach that path AFAICS normally.
> > 
> > Andrea tried working around it via PG_reserved pages and it resulted
> > in quite some ugly code. Andrea also noted that we cannot rely on any
> > random page walker to do the right thing when it comes to messed up
> > node/zone info.
> 
> I am sorry, I haven't followed previous discussions. Has the removal of
> the VM_BUG_ON been considered as an immediate workaround?

It was never discussed, but I'm not sure it's a good idea. Judging by
the commit message that introduced the VM_BUG_ON (commit 86051ca5eaf5
("mm: fix usemap initialization")) there was yet another inconsistency
in the memory map that required special care.

-- 
Sincerely yours,
Mike.
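
P.S. For anyone reading along without the patch at hand, the handling
of holes in the middle of a zone boils down to something like the
simplified sketch below. This is not the patch itself;
__init_single_page() is the internal mm/page_alloc.c helper that sets
up a single struct page, including its zone/node links.

	static void __init init_unavailable_range(unsigned long spfn,
						  unsigned long epfn,
						  int zone, int node)
	{
		unsigned long pfn;

		for (pfn = spfn; pfn < epfn; pfn++) {
			if (!pfn_valid(pfn))
				continue;
			/* link the page to the zone/node that spans the hole */
			__init_single_page(pfn_to_page(pfn), pfn, zone, node);
			/* nobody should ever touch a page in a hole */
			__SetPageReserved(pfn_to_page(pfn));
		}
	}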