Date: Tue, 27 Oct 2020 10:49:02 +0200
From: Mike Rapoport
To: "Edgecombe, Rick P"
Cc: "david@redhat.com", "cl@linux.com", "gor@linux.ibm.com", "hpa@zytor.com",
	"peterz@infradead.org", "catalin.marinas@arm.com",
	"dave.hansen@linux.intel.com", "borntraeger@de.ibm.com",
	"penberg@kernel.org", "linux-mm@kvack.org", "iamjoonsoo.kim@lge.com",
	"will@kernel.org", "aou@eecs.berkeley.edu", "kirill@shutemov.name",
	"rientjes@google.com", "rppt@linux.ibm.com", "paulus@samba.org",
	"hca@linux.ibm.com", "bp@alien8.de", "pavel@ucw.cz",
	"sparclinux@vger.kernel.org", "akpm@linux-foundation.org",
	"luto@kernel.org",
"davem@davemloft.net" , "mpe@ellerman.id.au" , "benh@kernel.crashing.org" , "linuxppc-dev@lists.ozlabs.org" , "rjw@rjwysocki.net" , "tglx@linutronix.de" , "linux-riscv@lists.infradead.org" , "x86@kernel.org" , "linux-pm@vger.kernel.org" , "linux-arm-kernel@lists.infradead.org" , "palmer@dabbelt.com" , "Brown, Len" , "mingo@redhat.com" , "linux-s390@vger.kernel.org" , "linux-kernel@vger.kernel.org" , "paul.walmsley@sifive.com" Subject: Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map Message-ID: <20201027084902.GH1154158@kernel.org> References: <20201025101555.3057-1-rppt@kernel.org> <20201025101555.3057-3-rppt@kernel.org> <20201026091554.GB1154158@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Mon, Oct 26, 2020 at 06:57:32PM +0000, Edgecombe, Rick P wrote: > On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote: > > On Mon, Oct 26, 2020 at 12:38:32AM +0000, Edgecombe, Rick P wrote: > > > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote: > > > > From: Mike Rapoport > > > > > > > > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page > > > > may > > > > be > > > > not present in the direct map and has to be explicitly mapped > > > > before > > > > it > > > > could be copied. > > > > > > > > On arm64 it is possible that a page would be removed from the > > > > direct > > > > map > > > > using set_direct_map_invalid_noflush() but __kernel_map_pages() > > > > will > > > > refuse > > > > to map this page back if DEBUG_PAGEALLOC is disabled. > > > > > > It looks to me that arm64 __kernel_map_pages() will still attempt > > > to > > > map it if rodata_full is true, how does this happen? > > > > Unless I misread the code, arm64 requires both rodata_full and > > debug_pagealloc_enabled() to be true for __kernel_map_pages() to do > > anything. > > But rodata_full condition applies to set_direct_map_*_noflush() as > > well, > > so with !rodata_full the linear map won't be ever changed. > > Hmm, looks to me that __kernel_map_pages() will only skip it if both > debug pagealloc and rodata_full are false. > > But now I'm wondering if maybe we could simplify things by just moving > the hibernate unmapped page logic off of the direct map. On x86, > text_poke() used to use this reserved fixmap pte thing that it could > rely on to remap memory with. If hibernate had some separate pte for > remapping like that, then we could not have any direct map restrictions > caused by it/kernel_map_pages(), and it wouldn't have to worry about > relying on anything else. Well, there is map_kernel_range() that can be used by hibernation as there is no requirement for particular virtual address, but that would be quite costly if done for every page. Maybe we can do somthing like if (kernel_page_present(s_page)) { do_copy_page(dst, page_address(s_page)); } else { map_kernel_range_noflush(page_address(page), PAGE_SIZE, PROT_READ, &page); do_copy_page(dst, page_address(s_page)); unmap_kernel_range_noflush(page_address(page), PAGE_SIZE); } But it seems that a prerequisite for changing the way a page is mapped in safe_copy_page() would be to teach hibernation that a mapping here may fail. -- Sincerely yours, Mike.