Date: Sun, 1 Nov 2020 19:02:17 +0200
From: Mike Rapoport
To: "Edgecombe, Rick P"
Cc: "david@redhat.com", "cl@linux.com", "gor@linux.ibm.com", "hpa@zytor.com",
	"peterz@infradead.org", "catalin.marinas@arm.com",
	"dave.hansen@linux.intel.com", "borntraeger@de.ibm.com",
	"penberg@kernel.org", "linux-mm@kvack.org", "iamjoonsoo.kim@lge.com",
	"will@kernel.org", "aou@eecs.berkeley.edu", "kirill@shutemov.name",
	"rientjes@google.com", "rppt@linux.ibm.com", "paulus@samba.org",
	"hca@linux.ibm.com", "bp@alien8.de", "pavel@ucw.cz",
	"sparclinux@vger.kernel.org", "akpm@linux-foundation.org",
	"luto@kernel.org", "davem@davemloft.net",
	"mpe@ellerman.id.au", "benh@kernel.crashing.org",
	"linuxppc-dev@lists.ozlabs.org", "rjw@rjwysocki.net",
	"tglx@linutronix.de", "linux-riscv@lists.infradead.org",
	"x86@kernel.org", "linux-pm@vger.kernel.org",
	"linux-arm-kernel@lists.infradead.org", "palmer@dabbelt.com",
	"Brown, Len", "mingo@redhat.com", "linux-s390@vger.kernel.org",
	"linux-kernel@vger.kernel.org", "paul.walmsley@sifive.com"
Subject: Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map
Message-ID: <20201101170217.GD14628@kernel.org>
References: <20201025101555.3057-1-rppt@kernel.org>
	<20201025101555.3057-3-rppt@kernel.org>
	<3b4b2b3559bd3dc68adcddf99415bae57152cb6b.camel@intel.com>
	<20201029075416.GJ1428094@kernel.org>
	<604554805defb03d158c09aba4b5cced3416a7fb.camel@intel.com>
In-Reply-To: <604554805defb03d158c09aba4b5cced3416a7fb.camel@intel.com>

On Thu, Oct 29, 2020 at 11:19:18PM +0000, Edgecombe, Rick P wrote:
> On Thu, 2020-10-29 at 09:54 +0200, Mike Rapoport wrote:
> > __kernel_map_pages() on arm64 will also bail out if rodata_full is
> > false:
> >
> > void __kernel_map_pages(struct page *page, int numpages, int enable)
> > {
> > 	if (!debug_pagealloc_enabled() && !rodata_full)
> > 		return;
> >
> > 	set_memory_valid((unsigned long)page_address(page), numpages,
> > 			 enable);
> > }
> >
> > So using set_direct_map() to map back pages removed from the direct
> > map with __kernel_map_pages() seems safe to me.
>
> Heh, one of us must have some simple boolean error in our head. I hope
> it's not me! :) I'll try one more time.

Well, then it's me :)
You are right, I misread this and could not understand why !rodata_full
bothers you.

> __kernel_map_pages() will bail out if rodata_full is false **AND**
> debug page alloc is off. So it will only bail under conditions where
> there could be nothing unmapped on the direct map.
>
> Equivalent logic would be:
>
> 	if (!(debug_pagealloc_enabled() || rodata_full))
> 		return;
>
> Or:
>
> 	if (debug_pagealloc_enabled() || rodata_full)
> 		set_memory_valid(blah)
>
> So if either is on, the existing code will try to re-map. But the
> set_direct_map_()'s will only work if rodata_full is on. So switching
> hibernate to set_direct_map() will cause the remap to be missed for
> the debug page alloc case, with !rodata_full.
>
> It also breaks normal debug page alloc usage with !rodata_full for
> similar reasons after patch 3. The pages would never get unmapped.

I've updated the patches; there should be no regression now.

-- 
Sincerely yours,
Mike.